8 years experience | 3 endorsements
With a strong foundation in Java and its enterprise frameworks (J2EE, Java EE), I have extensive experience building scalable, maintainable, and efficient applications. My skills span a range of Java technologies, from developing core backend logic to integrating with diverse data sources and third-party services.
Spring has been a cornerstone of my development work, especially in the realm of microservices and cloud-based architectures. I have leveraged Spring Framework’s Dependency Injection and Spring MVC for building modular, maintainable applications. More recently, I have been using Spring Boot to quickly develop production-ready applications, taking advantage of its auto-configuration, embedded servers, and seamless integration with databases and messaging systems. This has allowed me to deliver high-quality services faster and with minimal configuration.
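The dependency injection described above can be sketched in plain Java. This is an illustrative example, not code from any specific project: the class and interface names are hypothetical, and the manual wiring in `main` is exactly what Spring's IoC container automates through annotations and component scanning.

```java
// Hypothetical sketch of constructor-based dependency injection,
// the pattern Spring's container automates for you.
interface MessageRepository {
    String fetchMessage();
}

class InMemoryMessageRepository implements MessageRepository {
    public String fetchMessage() {
        return "hello from repository";
    }
}

class MessageService {
    private final MessageRepository repository;

    // With Spring, a single constructor is autowired automatically;
    // here the dependency is supplied by hand instead.
    MessageService(MessageRepository repository) {
        this.repository = repository;
    }

    String getMessage() {
        return repository.fetchMessage().toUpperCase();
    }
}

public class Main {
    public static void main(String[] args) {
        // Manual wiring: Spring would build this object graph from the
        // application context rather than via explicit `new` calls.
        MessageService service = new MessageService(new InMemoryMessageRepository());
        System.out.println(service.getMessage()); // HELLO FROM REPOSITORY
    }
}
```

Because `MessageService` depends on the interface rather than a concrete class, a test can inject a stub repository without touching any framework, which is the main maintainability benefit DI delivers.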
Beyond application development, I have worked extensively with the Hadoop ecosystem for handling large datasets. I've used HDFS for distributed storage and MapReduce for processing big data in parallel across clusters. On top of Hadoop, I've integrated Apache Spark to run complex data pipelines, which enabled faster processing than traditional MapReduce jobs. Spark's support for both batch and stream processing has helped me build real-time analytics solutions and efficiently manage large-scale data workflows.

As part of my data engineering work, I have managed several data processing pipelines on AWS using Elastic MapReduce (EMR). By using EMR's managed Hadoop and Spark clusters, I have been able to scale data processing jobs on demand while optimizing for both performance and cost. I've also automated cluster provisioning, data storage with S3, and the orchestration of data pipelines using AWS Glue. Leveraging AWS EMR for big data processing has allowed my team to handle massive datasets efficiently, without the overhead of managing on-premise infrastructure.
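The map/reduce shape mentioned above can be illustrated with a self-contained word count in plain Java streams. This is a conceptual sketch only: a real Hadoop MapReduce or Spark job would express the same tokenize-then-sum-per-key logic through the framework's APIs and distribute it across a cluster, rather than run it in one JVM.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Map phase: split text into words. Reduce phase: sum occurrences per key.
    // This mirrors the classic MapReduce word-count example in miniature.
    static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count("big data big clusters");
        System.out.println(counts.get("big")); // 2
    }
}
```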
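On-demand EMR provisioning of the kind described above is typically scripted with the AWS CLI. The command below is a hedged configuration sketch: the cluster name, S3 log bucket, release label, and instance sizing are placeholder values, not settings from any actual deployment.

```shell
# Illustrative sketch: provision a transient EMR cluster with Spark and Hadoop,
# then let it terminate itself when the submitted work finishes.
# Bucket name, release label, and instance sizing are hypothetical.
aws emr create-cluster \
  --name "example-spark-cluster" \
  --release-label emr-6.15.0 \
  --applications Name=Hadoop Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --log-uri s3://example-emr-logs/ \
  --auto-terminate
```

Pairing `--auto-terminate` with transient clusters is one common way to keep costs down: the cluster exists only while a job runs, with inputs and outputs persisted in S3 rather than on the cluster itself.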