Data Analyst

A Data Analyst is responsible for collecting, preparing, and analyzing data to extract meaningful insights. They discover how data can be used to answer questions and solve problems. Data Analysts may be responsible for creating dashboards and for designing and maintaining relational databases (SQL) and systems for different departments throughout their organization, using business intelligence software (Power BI, Tableau) and programming (Python).

I am a certified DataCamp Data Analyst Associate. As a Data Analyst Associate I can demonstrate that I have the knowledge, skills, and abilities to succeed at the entry level in this role. The competency domains assessed included, but were not limited to:

  • Data Management
  • Exploratory Analysis
  • Statistical Experimentation
  • Communication and Visualization


As a Data Analyst in Python, SQL, Tableau, Power BI or Spreadsheets I can:

  • analyze data in spreadsheets, apply data validation, and calculate averages,
  • build dashboards to track financial securities performance, measure reward and risk indicators, and create investment models in spreadsheets,
  • import, clean, manipulate, and visualize data in Power BI,
  • write basic SQL queries,
  • group and aggregate data to produce summary statistics,
  • join tables and apply filters and sub-queries,
  • write functions to explore and manipulate data,
  • import, clean, manipulate and visualize data with some of the most popular Python libraries, including pandas, NumPy, Seaborn, and many more.

My Python, SQL, Tableau, Power BI and Spreadsheets Learning Path

With DataCamp, Coursera and Microsoft Learn I built my skills and experience and validated my knowledge:

Data Analyst with Python (Datacamp) 36 hours (skill track ⇒ certificate)

In this track I began my data analyst training with interactive exercises and got hands-on with some of the most popular Python libraries, including pandas, NumPy, Seaborn, and many more. I learned why Python is so popular for data analysis and worked with real-world datasets to grow my data manipulation and exploratory data analysis skills. I also learned key statistics skills, like hypothesis testing.

Python is a general-purpose programming language that is becoming ever more popular for data science. Companies worldwide are using Python to harvest insights from their data and gain a competitive edge. This course focused on Python specifically for data science. I learned about powerful ways to store and manipulate data, and helpful data science tools to begin conducting my own analyses.

In this course I discovered how dictionaries offer an alternative to Python lists, and why the pandas dataframe is the most popular way of working with tabular data. In the second chapter of this course, I found out how I can create and manipulate datasets, and how to access them using these structures.
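The dictionary-to-DataFrame workflow described above can be sketched in a few lines (the column names and values here are invented for illustration, not from the course datasets):

```python
import pandas as pd

# A dictionary maps column names to equal-length lists of values
data = {
    "country": ["Belgium", "France", "Germany"],
    "population_m": [11.6, 67.8, 83.2],
}

# The same data as a DataFrame: labeled rows and columns, tabular access
df = pd.DataFrame(data)

# Column access works like a dictionary lookup and returns a Series
countries = df["country"].tolist()

# Row/column access by label via .loc
france_pop = df.loc[1, "population_m"]
print(countries, france_pop)  # ['Belgium', 'France', 'Germany'] 67.8
```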

pandas is one of the world’s most popular Python libraries, used for everything from data manipulation to data analysis. In this course, I learned how to manipulate DataFrames as I extracted, filtered, and transformed real-world datasets for analysis. Along the way I explored core data science concepts. Using real-world data, including Walmart sales figures and global temperature time series, I learned how to import and clean data, calculate statistics, and create visualizations—using pandas to add to the power of Python.
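A minimal sketch of that extract–filter–transform workflow, using made-up sales figures rather than the actual course dataset:

```python
import pandas as pd

# Hypothetical store sales; the course uses real Walmart figures instead
sales = pd.DataFrame({
    "store": ["A", "A", "B", "B"],
    "weekly_sales": [1200.0, 950.0, 1870.0, 1430.0],
})

# Filter: keep only the weeks with sales above 1000
high = sales[sales["weekly_sales"] > 1000]

# Transform: derive a new column from an existing one
sales["sales_k"] = sales["weekly_sales"] / 1000

# Summary statistic across all rows
mean_sales = sales["weekly_sales"].mean()
print(len(high), mean_sales)  # 3 1362.5
```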

pandas is a crucial cornerstone of the Python data science ecosystem, with Stack Overflow recording 5 million views for pandas questions. In this course I learned how to handle multiple DataFrames by combining, organizing, joining, and reshaping them using pandas. I worked with datasets from the World Bank and the City of Chicago. I finished the course with a solid skill set for data joining in pandas.
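The joining techniques described above come down to pandas' `merge`. A small sketch with invented tables (not the World Bank or City of Chicago data):

```python
import pandas as pd

# Two small invented tables sharing a key column
cities = pd.DataFrame({"city_id": [1, 2], "city": ["Chicago", "Boston"]})
stats = pd.DataFrame({"city_id": [1, 2, 3], "population_m": [2.7, 0.7, 8.8]})

# Inner join: keep only city_ids present in both tables (city_id 3 is dropped)
merged = cities.merge(stats, on="city_id", how="inner")
print(merged)
```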

Statistics is the study of how to collect, analyze, and draw conclusions from data. It’s a hugely valuable tool that I can use to bring the future into focus and infer the answer to tons of questions. In this course, I discovered how to answer questions like these as I grew my statistical skills and learned how to calculate averages, use scatter plots to show the relationship between numeric values, and calculate correlation. I also learned how to tackle probability, the backbone of statistical reasoning, and how to use Python to conduct a well-designed study and draw my own conclusions from data.
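Averages and correlation, as described above, can be computed directly with NumPy. A small sketch with invented numbers:

```python
import numpy as np

# Invented paired measurements with a roughly linear relationship
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Average of x
mean_x = np.mean(x)

# Pearson correlation coefficient: close to 1 means a strong
# positive linear relationship (what a scatter plot would show)
r = np.corrcoef(x, y)[0, 1]
print(mean_x, r)  # 3.0 and a value close to 1
```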

Seaborn is a powerful Python library that makes it easy to create informative and attractive data visualizations. In this course I explored the library and created Seaborn plots based on a variety of real-world data sets, including exploring how air pollution in a city changes through the day and looking at what young people like to do in their free time. This data gave me the opportunity to find out about Seaborn’s advantages first hand, including how easily I can create subplots in a single figure and automatically calculate confidence intervals.

Exploratory data analysis is a process for exploring datasets, answering questions, and visualizing results. This course presented the tools I needed to clean and validate data, to visualize distributions and relationships between variables, and to use regression models to predict and explain. I explored data related to demographics and health, including the National Survey of Family Growth and the General Social Survey. But the methods I learned apply to all areas of science, engineering, and business. I used pandas, a powerful library for working with data, and other core Python libraries including NumPy and SciPy, StatsModels for regression, and Matplotlib for visualization.

Sampling in Python is the cornerstone of inferential statistics and hypothesis testing. It’s a powerful skill used in survey analysis and experimental design to draw conclusions without surveying an entire population. In this Sampling in Python course, I discovered when to use sampling and how to perform common types of sampling—from simple random sampling to more complex methods like stratified and cluster sampling. I also learned how to estimate population statistics and quantify uncertainty in my estimates by generating sampling distributions and bootstrap distributions.
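The bootstrap idea above can be sketched in plain NumPy (the sample itself is simulated, with invented parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

# An invented sample standing in for survey data
sample = rng.normal(loc=100, scale=15, size=200)

# Bootstrap: resample with replacement many times, recording the mean each time
boot_means = np.array([
    rng.choice(sample, size=len(sample), replace=True).mean()
    for _ in range(1000)
])

# A 95% confidence interval from the bootstrap distribution's percentiles
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% CI for the mean: [{lo:.1f}, {hi:.1f}]")
```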

Hypothesis testing lets me answer questions about my datasets in a statistically rigorous way. In this course, I learned how and when to use common tests like t-tests, proportion tests, and chi-square tests. Working with real-world data, including Stack Overflow user feedback and supply-chain data for medical supply shipments, I gained a deep understanding of how these tests work and the key assumptions that underpin them. I also discovered how non-parametric tests can be used to go beyond the limitations of traditional hypothesis tests.
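A two-sample t-test of the kind described above can be sketched with SciPy (the groups here are simulated, not the course's real datasets):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two simulated groups whose true means differ by 1.0
group_a = rng.normal(loc=5.0, scale=1.0, size=100)
group_b = rng.normal(loc=6.0, scale=1.0, size=100)

# Independent two-sample t-test; H0: the group means are equal
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p rejects H0
```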

Data Analyst in SQL (Datacamp) 39 hours (skill track ⇒ certificate)

I learned the fundamentals of database design and how to: write basic SQL queries, group and aggregate data to produce summary statistics, join tables and apply filters and sub-queries, write functions to explore and manipulate data.

In this course I learned how to choose the best visualization for my dataset, and how to interpret common plot types like histograms, scatter plots, line plots and bar plots. I also learned about best practices for using colors and shapes in my plots, and how to avoid common pitfalls.

Statistics are all around us, from marketing to sales to healthcare. The ability to collect, analyze, and draw conclusions from data is not only extremely valuable, but it is also becoming commonplace to expect roles that are not traditionally analytical to understand the fundamental concepts of statistics.

I learned how to structure and query relational databases using SQL.

I learned how to filter and compare data, how to use aggregate functions to summarize data, how to sort and group data, how to present data cleanly using tools such as rounding and aliasing.
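Grouping, aggregating, rounding, and aliasing can be sketched with SQLite from Python (the table and its data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Invented sales table
cur.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('North', 100.0), ('North', 150.0), ('South', 80.0);
""")

# Group, aggregate, round, alias, and sort in one query
rows = cur.execute("""
    SELECT region, ROUND(AVG(amount), 1) AS avg_amount
    FROM sales
    GROUP BY region
    ORDER BY avg_amount DESC;
""").fetchall()
print(rows)  # [('North', 125.0), ('South', 80.0)]
```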

In this course, I learned how to work with more than one table in SQL, use inner joins, outer joins and cross joins, leverage set theory, including unions, intersect, and except clauses, create nested queries.
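An inner join of the kind described above, sketched against an in-memory SQLite database (table names and rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Two invented tables linked by a foreign key
cur.executescript("""
    CREATE TABLE countries (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE cities (name TEXT, country_id INTEGER);
    INSERT INTO countries VALUES (1, 'France'), (2, 'Spain');
    INSERT INTO cities VALUES ('Paris', 1), ('Lyon', 1), ('Madrid', 2);
""")

# Inner join: match each city to its country via the shared key
rows = cur.execute("""
    SELECT cities.name, countries.name
    FROM cities
    INNER JOIN countries ON cities.country_id = countries.id
    ORDER BY cities.name;
""").fetchall()
print(rows)  # [('Lyon', 'France'), ('Madrid', 'Spain'), ('Paris', 'France')]
```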

I learned the robust use of CASE statements, subqueries, and window functions—all while discovering some interesting facts about soccer using the European Soccer Database.

I learned how to create queries for analytics and data engineering with window functions, the SQL secret weapon! Using flights data, I discovered how simple it is to use window functions, and how flexible and efficient they are.
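A window-function sketch using in-memory SQLite (which supports window functions since version 3.25; the flight data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # SQLite 3.25+ supports window functions
cur = con.cursor()

# Invented passenger counts standing in for the course's flights data
cur.executescript("""
    CREATE TABLE flights (route TEXT, day INTEGER, passengers INTEGER);
    INSERT INTO flights VALUES
        ('AMS-JFK', 1, 200), ('AMS-JFK', 2, 250),
        ('AMS-LHR', 1, 150), ('AMS-LHR', 2, 100);
""")

# RANK() ranks days within each route, without collapsing rows like GROUP BY
rows = cur.execute("""
    SELECT route, day, passengers,
           RANK() OVER (PARTITION BY route ORDER BY passengers DESC) AS rnk
    FROM flights
    ORDER BY route, rnk;
""").fetchall()
print(rows[0])  # ('AMS-JFK', 2, 250, 1): the busiest day on the AMS-JFK route
```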

I learned the most important PostgreSQL functions for manipulating, processing, and transforming data.

I used functions to aggregate, summarize, and analyze data without leaving the database. I learned common problems to look for and strategies to clean up messy data. I also learned how to explore my own PostgreSQL databases and analyze the data in them.

In this course, I learned how to use SQL to support decision making. I learned to apply SQL queries to study, for example, customer preferences, customer engagement, and sales development. This course also covered SQL extensions for online analytical processing (OLAP), which make it easier to obtain key insights from multidimensional aggregated data.

In this course, I learned how to use storytelling to connect with my audience and help them understand the content of my presentation—so they can make the right decisions. I also learned the advantages and disadvantages of oral and written formats. I improved how I translate technical results into compelling stories, using the correct data, visualizations, and in-person presentation techniques.

Data Analyst in Tableau (Datacamp) 42 hours (skill track ⇒ certificate)

In this course I learned how to use Tableau’s features to clean, analyze, and visualize data. Additionally, I learned how to connect data, create impactful, presentation-ready data visualizations, and familiarize myself with Tableau’s features and how to use them to my advantage. Finally I learned how to leverage advanced calculations and apply statistical techniques.

In this track, I learned how to navigate Tableau’s interface and connect and present data using easy-to-understand visualizations. I also learned how to confidently explore Tableau and build impactful data dashboards.

In this course, I learned how to create detail-rich map visualizations, configure date and time fields to show trends over time, and extend my data using Calculated Fields. I also learned how to complete a customer analytics case study and how to create bins, customize filters and interactions, and apply quick table calculations. Finally, I learned power user techniques, including how to slice and dice data.

In this course, I learned how to apply dashboard-composition best practices, add interactive or explanatory elements, and use dashboard actions to make my dashboard interactive. Additionally, I learned how to modify an existing dashboard layout for mobile devices to share as an image or a PDF. Finally, I learned how to share my data story through Tableau’s story functionality.

In this Tableau case study, I investigated a dataset from an example telecom company called Databel and analyzed their churn rates. I created calculated fields and various visualizations in Tableau, such as dual-axis graphs and scatter plots. I made my graphs dynamic by using filters and parameters, and combined everything into a story to share my insights.

In this course, I learned how to use connectors in Tableau to create a live connection to CSV and Excel files. I also learned how to combine multiple data tables with joins, unions, and relationships. Finally, I learned to manage different data properties, like renaming data fields, assigning aliases, changing data types, and changing default properties for a data field.

Data visualization is one of the most desired skills for data analysts, allowing them to communicate insights in an understandable and impactful way. This course covered a range of data visualization skills using Tableau, allowing me to better present my findings.

In this interactive course, I learned how to create calculations in Tableau to bring my visualizations to the next level. I learned which functions to use for my Tableau calculations and when to use them. I also learned how to solve business problems using Tableau, including cohort and survival analyses, how to prepare a what-if scenario with a dynamic quadrant chart, and how to troubleshoot my calculations.

In this Tableau case study, I explored a real-world job posting dataset to uncover insights for a fictional recruitment company called DataSearch. Using visualization techniques from previous courses, I investigated the data to find out which skills are most in demand for data scientists, data analysts, and data engineers.

I learned how to perform univariate and bivariate exploratory data analysis and create regression models to spot hidden trends. Working with real-world datasets, I also used machine learning techniques such as clustering and forecasting.

Data Analyst in Power BI (Datacamp) 51 hours (skill track ⇒ certificate)

In this track I learned how to import, clean, manipulate, and visualize data in Power BI—all critical skills for any aspiring data professional. Through hands-on exercises, I learned data analysis best practices and discovered a world of Power BI functionalities, including data modeling, DAX, Power Query, and many others.

In this course, I learned how to use this popular business intelligence platform through hands-on exercises. Before diving into creating visualizations using Power BI’s drag-and-drop functionality, I first learned how to confidently load and transform data using Power Query and the importance of data models. I also learned to drill down into reports and make my reports fully interactive.

I learned fundamental concepts and best practices for implementing DAX in my reports. I learned to write DAX code to generate calculated columns, measures, and tables, while building supporting knowledge around ‘context’ in Power BI. Finally, I rounded off the course with time-intelligence functions and learned how to use Quick Measures to create complex DAX code.

In this Power BI course, I learned to create insightful visualizations through built-in and customized charts and conditional formatting. I discovered how to create a plethora of visualizations, such as scatter plots, tornado charts, and gauges, and how to visualize everything without overwhelming my audience.

In this Power BI case study, I investigated a dataset from an example telecom company called Databel and analyzed their churn rates. Analyzing churn doesn’t just mean knowing what the churn rate is: it’s also about figuring out why customers are churning at the rate they are, and how to reduce churn. I answered these questions by creating measures and calculated columns, while simultaneously creating eye-catching report pages.

In this interactive Power BI course, I learned how to use Power Query Editor to transform and shape my data to be ready for analysis. I also got to grips with various text and numerical transformations, including multiplication, rounding, and splitting and merging text columns, to help me become even more efficient in data preparation.

In this course, I learned all about table transformations in Power BI. I learned how to (un)pivot, transpose, and append tables. I was also introduced to joins and discovered when it makes sense to use them. Finally, I gained power with custom columns, including how to use the M language and the Advanced Editor, to help me be even more efficient in data preparation.

In this course I explored a toolbox of data cleaning, shaping, and loading techniques that I can apply to my data. I also learned how to choose between Power Query and Power BI, and discovered the foundations of data modeling through star and snowflake schemas.

In this course, I extended my knowledge about facts, dimensions, and their relationships. I learned about the cardinality of relationships and how I can use bi-directional cross-filtering in my model. I also explored the use of quick measures and hierarchies and wrote DAX to fully customize my data model. Finally, I was introduced to Power BI reporting best practices to improve the performance of my reports.

In this Power BI case study, I explored a dataset for a fictitious software company called Atlas Labs. This course focused on helping me import, analyze and visualize Human Resources data in Power BI. I learned how to effectively work with Power BI using example data. I carried out exploratory data analysis and used DAX to help build powerful visualizations. I finished my analysis by diving deeper into attrition and what factors impact attrition.

DAX, or Data Analysis eXpressions, is a formula language used in Microsoft Power BI to create calculated columns, measures, and custom tables. Once mastered, DAX gives powerful control over visuals and reports, allowing for better performance and more flexibility. This course covered core concepts such as row, query, and filter context, with exercises focusing on filtering, counting, ranking, and iterating functions.

This course introduced me to new DAX functions and their many use cases. First of all, I expanded my core DAX knowledge by learning how to write logical functions. Secondly, I discovered how I can write DAX functions for row-level security (RLS) purposes and how to use DAX to manipulate tables and create nested functions.

In this course I learned how to design with users in mind. I also learned to use R and Python to create unique visualizations, adding custom chart types that would otherwise not be available in Power BI. I was introduced to some best practices in data visualization and learned to optimize my visualizations to be more accessible to visually impaired individuals.

I started by using descriptive statistics to spot outliers, identify missing data, and apply imputation techniques to fill gaps in my dataset. I also learned how EDA in Power BI can help me discover the relationships between variables—both categorical and continuous—using basic statistical measures and box and scatter plots.

In this course, I learned how to analyze time series, visualize my data, and spot trends. I also built new date variables, discovered run charts, and calculated rolling averages. Finally, I found out how to identify which variables exhibit the most influence on the target variable using Power BI’s decomposition trees and key influencers.

In this advanced course, I learned data storytelling techniques that go beyond simply building dashboards, including buttons and bookmarks to create more interactive visualizations. I customized the user experience with drillthrough filters and emoji, and learned how to tweak the Q&A feature for personalized reports.

I learned practical techniques for incorporating DAX measures and calculations in my reports—empowering users to filter, highlight values, and group data effectively. Through hands-on exercises, I also learned how to apply progressive disclosure, a user experience (UX) technique that makes reporting easier, before discovering how to change report themes and optimize them for mobile users.

In this course, I learned the differences between Power BI Desktop and Power BI Service when it comes to data connections. I also learned about the different ways that Power BI stores data when connecting to a source, and how to amend connections after they have been made. Finally, I learned how to use parameters and the M language in Power BI Desktop to level up my handling of data import processes.

I learned how to secure report access by managing access to datasets, implementing row-level security, and applying sensitivity labels to prevent unauthorized data re-use or exfiltration. I also discovered how to promote and certify content in Power BI before learning how to save time by subscribing to reports and setting up data alerts—making it easy to keep on top of changes to data in my reports.

Spreadsheet Fundamentals (Datacamp) 17 hours (skill track ⇒ certificate)

In this track I learned the fundamental skills necessary to analyze data in spreadsheets—data analysis, manipulation, and visualization—and how to analyze and visualize data efficiently and effectively. I understood the core functionality of spreadsheets, exploring how to use predefined functions to analyze data. I also learned about data types, manipulation of numeric and logical data, missing data, and error types.

In this course I learned the fundamentals of spreadsheets by working with tabular data and performing calculations. I also created my own formulas and learned how to use references to connect cells and bring my spreadsheets to life.

I learned how to clean and analyze data in spreadsheets, discovered how to use built-in functions to sort and filter data, and used the VLOOKUP function to combine data from different tables.

In this course I dived deeper into data types, practiced manipulating numeric and logical data, explored missing data and error types, and calculated some summary statistics. I explored datasets on 100m sprint world records, asteroid close encounters, benefit claims, and butterflies.

In this course I explored the world of Pivot Tables within Google Sheets, and learned how to quickly organize thousands of datapoints with just a few clicks of the mouse. I also analyzed the average rainfall across multiple US cities, the top 10 of the Fortune Global 500, and a selection of films released between 2010 and 2016. I then learned techniques such as sorting, subtotaling, and filtering my data using these real-world examples.

I learned how to create common chart types like bar charts, histograms, and scatter charts, as well as more advanced types, such as sparkline and candlestick charts. I also looked at how to prepare my data and use Data Validation and VLookup formulas to target specific data to chart. After that I learned how to use Conditional Formatting to apply a format to a cell or a range of cells based on certain criteria, and finally, how to create a dashboard showing plots and data together.


Intermediate Spreadsheets (Datacamp) 12 hours (skill track ⇒ certificate)

In this track I learned time-saving methods such as data validation and gained a foundation of statistical concepts to quickly calculate averages and analyze data. I expanded my spreadsheet mastery and created impressive visualizations, including histograms, scatter plots, and bar plots. I also leveraged my data validation skills and more advanced techniques, such as regular expressions, to create my own marketing dashboard using real-world digital marketing data.

Statistics is the science that deals with the collection, analysis, and interpretation of data. In this course I used spreadsheet functions, dived into averages, distributions, and hypothesis testing, and concluded the course by applying my newfound knowledge in a case study.

  • Error and Uncertainty in Spreadsheets (course ⇒ certificate)

In this course I made some predictions myself, learned to distinguish real differences from random noise, and explored psychological crutches we use that interfere with our rational decision making. I also uncovered patterns in Seattle crime data, predicted students’ final grades, prevented Nashville traffic accidents, and determined whether a bakery’s menu needs to change.

  • Marketing Analytics in Spreadsheets (course ⇒ certificate)

Spreadsheets are an essential tool for any marketing professional. Data validation and regular expressions are powerful tools for marketing analysts. In this course I visualized data by building charts. I also explored a dataset that included the kind of information I encountered in the world of digital marketing. I spotted errors in metrics using data validation, used regular expressions to aggregate campaign metrics, built charts to analyze campaign performance, and used everything I learned to build a dynamic dashboard.

Data Science Math Skills (Duke University, Coursera) 13 hours (course) (certificate)

In this course I learned the basic math I needed in order to be successful in almost any data science math course. Data Science Math Skills introduced the core math that data science is built upon, with no extra complexity, introducing unfamiliar ideas and math symbols one-at-a-time. I mastered the vocabulary, notation, concepts, and algebra rules that all data scientists must know before moving on to more advanced material. Topics included:

  • Set theory, including Venn diagrams
  • Properties of the real number line
  • Interval notation and algebra with inequalities
  • Uses for summation and Sigma notation
  • Math on the Cartesian (x,y) plane, slope and distance formulas
  • Graphing and describing functions and their inverses on the x-y plane
  • The concept of instantaneous rate of change and tangent lines to a curve
  • Exponents, logarithms, and the natural log function
  • Probability theory, including Bayes’ theorem


Duke University has about 13,000 undergraduate and graduate students and a world-class faculty helping to expand the frontiers of knowledge. The university has a strong commitment to applying knowledge in service to society, both near its North Carolina campus and around the world.

⇒ Verify at: Coursera

Excel Skills for Data Analytics and Visualization (Macquarie University, Coursera) 47 hours (course) (certificate)

In this course I learned how to bring data to life using advanced Excel functions, creative visualizations, and powerful automation features. These courses equipped me with a comprehensive set of tools for transforming, linking, and analysing data. I mastered a broad range of charts and created stunning interactive dashboards. Finally, I explored a new dimension in Excel with PowerPivot, Get and Transform, and DAX. Harnessing the power of an underlying database engine, I removed the 1,048,576-row limitation, completely automated data transformation, created data models to effectively link data, and opened the gateway to Power Business Intelligence.


Macquarie University is ranked among the top one per cent of universities in the world, and with a 5-star QS rating, they are recognised for producing graduates who are among the most sought-after professionals in the world. Since their foundation 54 years ago, they have aspired to be a different type of university: one focused on fostering collaboration between students, academics, industry and society.

⇒ Verify at: Coursera

Improving Your Statistical Questions (Eindhoven University of Technology, Coursera) 17 hours (course) (certificate)

This course aimed to help me ask better statistical questions when performing empirical research. I learned how to design informative studies, both when my predictions are correct and when they are wrong. I learned how to question norms and reflect on how I can improve research practices to ask more interesting questions. In practical hands-on assignments I learned techniques and tools that can be immediately implemented in my own research, such as thinking about the smallest effect size I am interested in, justifying my sample size, evaluating findings in the literature while taking publication bias into account, performing a meta-analysis, and making my analyses computationally reproducible.


Eindhoven University of Technology (TU/e) is a young university, founded in 1956 by industry, local government and academia. Today, their spirit of collaboration is still at the heart of the university community. They foster an open culture where everyone feels free to exchange ideas and take initiatives. They offer academic education that is driven by fundamental and applied research. Their educational philosophy is based on personal attention and room for individual ambitions and talents. Their research meets the highest international standards of quality. The university pushes the limits of science, which puts it at the forefront of rapidly emerging areas of research.

⇒ Verify at: Coursera

Process Mining: Data Science in Action (Eindhoven University of Technology, Coursera) 22 hours (course) (certificate)

The course explained the key analysis techniques in process mining. I learned various process discovery algorithms, which can be used to automatically learn process models from raw event data. Various other process analysis techniques that use event data were presented. Moreover, the course provided easy-to-use software, real-life data sets, and practical skills to directly apply the theory in a variety of application domains. The course started with an overview of approaches and technologies that use event data to support decision making and business process (re)design. It then focused on process mining as a bridge between data mining and business process modeling. The course was at an introductory level, with various practical assignments.



⇒ Verify at: Coursera

Google Data Analytics (Google, Coursera) 186 hours (course) (certificate)

Data analytics is the collection, transformation, and organization of data in order to draw conclusions, make predictions, and drive informed decision making.

Over 8 courses, I gained in-demand skills that prepared me for an entry-level job. I learned from Google employees whose foundations in data analytics served as launchpads for their own careers.

⇒ Verify at: Coursera

My badges:

Some articles about Data Analytics