Big Data Engineer

at

Arthur Grand Technologies Inc

Columbia, MD
Full Time
3y ago

Company Description

"Arthur Grand Technologies Inc. has been approved as a National Minority Supplier Development Council (NMSDC) Certified Minority Business Enterprise (MBE)"

Arthur Grand Technologies is in the business of providing staffing and technology consulting services. We have doubled our revenue year over year for the past five years, which speaks to the lasting relationships and customer satisfaction we have built in this short span of time. Our company is managed by a team of professionals who worked at Big 5 consulting firms for 20+ years. We are a minority-owned staff augmentation and technology consulting company.

Job Description

Position: Big Data Engineer

Location: Preference is for all candidates to work from our Columbia, MD office on a hybrid schedule: 2-3 days in the office and 2-3 days working from home.

For the right candidate, a fully remote option may be possible; however, the candidate must live in MD, VA, or DC.

Duration: Long Term

 

Responsibilities

  • Extract and integrate data from multiple data systems and organize it in a format that can be easily read by humans or machines.
  • Transition legacy Java and Hive ETL jobs to Spark ETLs.
  • Design and develop data processing solutions and custom ETL pipelines for varied data formats such as Parquet and Avro.
  • Design, develop, test, and release ETL mappings, mapplets, and workflows using StreamSets, Java MapReduce, Spark, and SQL.
  • Performance tuning of end-to-end ETL integration processes.
  • Analyze and recommend optimal approach for obtaining data from diverse source systems.
  • Work closely with the data architects and interface with business stakeholders to understand requirements and offer solutions.

 

Preferred Qualifications

  • BS in Information Technology, Computer Science, Software Engineering or related field.
  • Understanding of distributed computing principles and hands-on experience in Big Data Analytics and development.
  • Good knowledge of the Hadoop and Spark ecosystems, including HDFS, Hive, Spark, YARN, MapReduce, and Sqoop.
  • 4+ years of Big Data development experience.
  • 3+ years of experience in ETL development and functional programming knowledge, preferably with Scala, Spark, Java, Python, or R.
  • 3+ years of experience tuning Spark/Java code, SQL, and NoSQL workloads.
  • AWS development using big data technologies preferred.
  • AWS cloud-certified, Databricks, and Snowflake experience a plus.

Additional Information

All your information will be kept confidential according to EEO guidelines.

Apply for this job
