Posted on: 28 Mar 2024
Job Location: Hyderabad, IN
Job Description:
- Proficiency in programming languages commonly used in data engineering, such as Python, Scala, or SQL.
- Experience with Big Data technologies and frameworks such as Druid, Spark, Kafka, Hive, or similar.
- Familiarity with distributed computing principles and distributed file systems.
- Expertise in designing and implementing data pipelines for ETL (Extract, Transform, Load) processes.
- Knowledge of SQL and NoSQL databases, including but not limited to Oracle, MongoDB, PostgreSQL, and MySQL.
- Understanding of data modeling and database design principles.
- Ability to process large volumes of data, including real-time data, efficiently and effectively.
- Experience implementing data lake solutions, either on-premises or in the cloud.
- Expertise in handling high-volume data ingestion and distributed computing using Apache Druid and PySpark.
- Foundational knowledge of cloud computing with any one provider (AWS, Azure, or GCP).
- Strong programming skills in Python.
- Experience with and understanding of the overall big data ecosystem.
- Good experience with the Linux operating system.
- Good knowledge of distributed computing concepts and the ability to design and implement scalable algorithms for distributed environments.
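The ETL (Extract, Transform, Load) process named in the requirements can be sketched with a minimal, standard-library-only Python example. In production this would typically use PySpark or a similar framework; the CSV columns, cleaning rules, and SQLite target here are hypothetical illustrations, not part of the posting:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in a real pipeline this might arrive from
# Kafka, a data lake, or a distributed file system.
RAW_CSV = """event_id,user_id,amount
1,u42,10.50
2,u17,not_a_number
3,u42,3.25
"""

def extract(raw: str):
    """Extract: parse records out of the raw CSV source."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: drop malformed rows and cast fields to proper types."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["event_id"]), row["user_id"], float(row["amount"])))
        except ValueError:
            continue  # skip rows whose amount cannot be parsed
    return clean

def load(rows, conn):
    """Load: write the cleaned rows into a SQL table."""
    conn.execute("CREATE TABLE events (event_id INTEGER, user_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
print(total)  # 13.75 (the malformed row is dropped in transform)
```

The same three-stage shape carries over to distributed frameworks: in PySpark, `extract` would be a `spark.read` call, `transform` a chain of DataFrame operations, and `load` a `DataFrame.write` to the target store.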