Radu Gheorghiu

Mentor
Rising Codementor
US$15.00 per 15 minutes
ABOUT ME
I am curious to know the world and I like to use data to explain it

I am a Software Engineer who enjoys working with data. Whether it is storage, administration, querying, analysis, or processing, I believe data is the best tool we have for answering any question.

This is why I have devoted myself to learning and working with every part of the software ecosystem that handles data.

I enjoyed starting my career as a Report Developer, learning how to display data and transform it into useful information.

Then I moved to the administration side, solving problems so that data is stored correctly and can be accessed quickly and easily.

My next challenge is stepping up from the reporting role to more modern ways of working with data: data analytics, data science, and machine learning.

I always try to stay up to date with the newest technologies in these three categories, which I believe make up the data lifecycle:

  • acquire and store data
  • clean, process and analyze data
  • consume the data (either through direct visualization in a report or as a result obtained from a ML model)
English
Bucharest (+02:00)
Joined August 2020
EXPERTISE
  • 10 years experience
  • 6 years experience
  • 7 years experience
  • 5 years experience

SOCIAL PRESENCE
GitHub
ai21labs-hackathon (Python): 2 stars, 0 forks
riley-goodside-scale-ai-event: 1 star, 0 forks
Stack Overflow
20,389 reputation (16 gold, 75 silver, 110 bronze badges)
EMPLOYMENTS
Data Engineer
Accesa
May 2020 – Present
I work as a Data Engineer on an IoT project where measurement data from pools is read from multiple transactional MySQL databases into a Delta Lake in Azure. The Databricks workspace contains multiple notebooks with Spark code that ingest this data and process it into the information used in Power BI reports. I build the ETL pipelines and job schedules, create the data sources needed for the Power BI reports, and help create and deploy those reports in the online service.
Azure
Python 3
Apache Spark
Power BI
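
A minimal sketch of what one such ingestion notebook could look like in PySpark; the hostnames, table names, credentials, and secret scope below are hypothetical placeholders, and spark and dbutils are the globals a Databricks notebook provides:

    # Minimal sketch of one ingestion step: read a transactional MySQL table
    # over JDBC and append it to a Delta table. All connection details and
    # names below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

    # Read the source table from a transactional MySQL database
    measurements = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mysql://pool-db.example.com:3306/telemetry")
        .option("dbtable", "pool_measurements")
        .option("user", "etl_user")
        .option("password", dbutils.secrets.get("etl-scope", "mysql-password"))
        .load()
    )

    # Light processing before landing the data in the Delta Lake
    cleaned = (
        measurements
        .dropDuplicates(["measurement_id"])
        .withColumn("ingested_at", F.current_timestamp())
    )

    # Append to the Delta table that the Power BI data sources read from
    cleaned.write.format("delta").mode("append").saveAsTable("lake.pool_measurements")

A notebook of this shape can then be attached to a Databricks job schedule so the Delta tables behind the Power BI data sources stay current.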
Data Architect
Accesa
July 2019 – May 2020
Worked as a Data Architect to create the data model and ETL processes, and outlined the documentation template for migrating data stored in an old dBASE database to SQL Server 2016. Created custom T-SQL scripts and migration pipelines, automated with Azure DevOps, that allowed versioned T-SQL scripts to be run to create specific versions of databases for the application developers. During my time on the project I outlined the general data model, set up documentation best practices using Enterprise Architect, and created T-SQL templates that could be called from SQLCMD and "injected" with specific table-based migration logic. I handled discussions with the client, ran discovery sessions, and laid out the general plan and major milestones the project needed to hit in order to continue smoothly without much further involvement from my side.
Microsoft SQL Server
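
To illustrate the template-injection approach, here is a minimal Python sketch (not the project's actual tooling) that runs a versioned T-SQL template through SQLCMD; the server, database, file, and table names are hypothetical, and inside the .sql file the $(SourceTable) and $(TargetTable) scripting variables are expanded by SQLCMD before execution:

    # Hypothetical wrapper: run one versioned T-SQL template via SQLCMD,
    # injecting table-specific values as SQLCMD scripting variables.
    import subprocess

    def run_migration_template(template: str, source_table: str, target_table: str) -> None:
        subprocess.run(
            [
                "sqlcmd",
                "-S", "sql2016.example.com",  # target SQL Server instance (placeholder)
                "-d", "MigrationDb",          # target database (placeholder)
                "-i", template,               # versioned T-SQL template file
                # $(SourceTable) and $(TargetTable) in the template are replaced
                # with these values by SQLCMD's scripting-variable substitution
                "-v", f"SourceTable={source_table}", f"TargetTable={target_table}",
                "-b",                         # exit with an error code if a script step fails
            ],
            check=True,  # raise if sqlcmd reports a failure
        )

    # Apply the same template to one table-specific migration
    run_migration_template("migrate_table.sql", "DBASE_CUSTOMERS", "dbo.Customers")

Running a step like this once per script version from an Azure DevOps pipeline is one way to build a database at any specific version.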
Data Scientist
Accesa
October 2018 – July 2019
Worked as a Data Scientist and led a team of three Master's students in building data analysis pipelines and analyzing data from a factory's production line. The end goal was a model to predict faults in the production line so that preventive maintenance could be performed, minimizing the number of faulty parts at the end of the production cycle. That goal was not reached during my time on the project due to the low quality of the data. However, important steps were made in analyzing the data and creating the data model and ETL processes for reading, processing, and storing the data from the original .CSV files streamed to our system into a MongoDB-based data store.
MongoDB
NumPy
Pandas
Python 3
TensorFlow
Keras
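
A minimal sketch of the CSV-to-MongoDB path described above, assuming hypothetical file, column, database, and collection names:

    # Minimal sketch: read one streamed .CSV batch, clean it with pandas,
    # and store the records in a MongoDB collection. All names are hypothetical.
    import pandas as pd
    from pymongo import MongoClient

    def load_measurements(csv_path: str) -> int:
        # Read and clean one batch of production-line measurements
        df = pd.read_csv(csv_path, parse_dates=["timestamp"])
        df = df.dropna(subset=["sensor_id", "value"]).drop_duplicates()

        # Store the cleaned records in the MongoDB-based data store
        client = MongoClient("mongodb://localhost:27017")
        result = client["factory"]["measurements"].insert_many(
            df.to_dict(orient="records")
        )
        return len(result.inserted_ids)

    inserted = load_measurements("line_batch_001.csv")
    print(f"stored {inserted} measurements")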