Ciklum is a Software Engineering and Solutions Company. Our 3,000+ IT professionals are located in offices and delivery centres in Ukraine, Belarus, Poland and Spain.
As a Ciklum employee, you'll have the unique opportunity to communicate directly with the client when working in Extended Teams. Besides, Ciklum is the place to make your tech ideas tangible. The Vital Signs Monitor for the Children’s Cardiac Center, as well as the Smart Defibrillator, winner of the US IoT World Hackathon, are among the cool things Ciklumers have developed.
Ciklum is a technology partner for Google, Intel, Micron, and hundreds of world-known companies. We are looking forward to seeing you as a part of our team!
On behalf of Ciklum Digital, we are looking for an Expert Data Engineer to join the UA team on a full-time basis. You will join a highly motivated team and will be working on a modern e-commerce solution for our client. We are looking for technology experts who want to make an impact on new business by applying best practices and taking ownership.
Client is building the next generation global on-demand delivery platform. We have grown rapidly from inception in 2011 to become the world’s largest food-ordering network, and we’re now innovating and creating new verticals. Our awesome international team already operates in 40+ countries worldwide, and we are looking for the most talented people to join us on our mission to ‘always deliver an amazing experience’.
Client is building the next generation online food-delivery platform, with data at the center of delivering amazing food experiences.
Founded as one of the first online food-ordering portals in the region, the client is today the food delivery platform leader in Saudi Arabia, with more than 100,000 orders per day and millions of happy customers every month.
Responsibilities:
- Focus on Goals/Delivery – the ability to internalize our long-/medium-term vision and motivate a team to work towards our goals
- Development and testing of new data pipelines/integrations/processes (involves programming)
- Maintenance of existing data systems (data warehouse, data pipelines, integrations, etc.)
- Sanity checks of data systems, and data quality, consistency & accuracy checks
- Performance tuning of existing data processes and pipelines
- Collecting and preparing data for ad-hoc reporting/analytics requests
- Responsible for the building, deployment, and maintenance of mission critical analytics solutions that process data quickly at big data scales
- Contributes design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, and data extraction, transformation, and loading across multiple data storages.
- Owns one or more key components of the infrastructure and works to continually improve it, identifying gaps and improving the platform’s quality, robustness, maintainability, and speed.
- Cross-trains other team members on technologies being developed, while also continuously learning new technologies from other team members.
- Interacts with engineering teams and ensures that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability.
- Performs development, QA, and DevOps roles as needed to ensure end-to-end responsibility for solutions.
- Contributes to CoE activities and community building, participates in conferences, and shares expertise and best practices.
Requirements:
- 6+ years of experience in a Data Engineering role
- 3+ years of experience coding in SQL and any of Java, Python, C# or Scala, with solid CS fundamentals, including data structure and algorithm design (with a focus on distributed systems)
- 1+ years of hands-on implementation experience with a combination of the following technologies: Hadoop, MapReduce, Pig, Hive, Impala, Spark, Kafka, Storm, Presto, and NoSQL data stores such as HBase and Cassandra
- Relational databases such as Postgres, MySQL, Redshift, etc. (advanced SQL writing and optimization skills are required)
- 1+ years of experience in cloud data platforms (AWS, GCP)
- Cloud data warehouses such as AWS Redshift or Google BigQuery
- Engineering excellence – a proven track record of substantially impacting the development of complex distributed data pipelines and databases.
- Knowledge of containerization technologies such as Kubernetes/Docker
- Experience with Linux and shell scripting
- Knowledge of SQL and MPP databases (e.g. Vertica, Netezza, Greenplum, Aster Data)
- Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Knowledge of Data Warehousing design, implementation and optimization
- Knowledge of Data Quality testing, automation and results visualization
- Experience participating in an Agile software development team, e.g. Scrum
- Experience designing, documenting, and defending designs for key components in large distributed computing systems
- A consistent track record of delivering exceptionally high quality software on large, complex, cross-functional projects
- Demonstrated ability to learn new technologies quickly and independently
- Ability to handle multiple competing priorities in a fast-paced environment
- Undergraduate degree in Computer Science or Engineering from a top CS program required; a Master's degree is preferred
- Understanding of cloud infrastructure design and implementation
- Experience in backend development and deployment
- Experience in CI/CD configuration
- Good knowledge of data analysis in enterprises
Personal skills:
- Execution – a “getting things done” mentality; ability to manage multiple projects at the same time, with strong prioritization skills and experience with versioning tools
- You are a pragmatic programmer who understands what is needed to get things done.
- You are fluent in English.
- A curious mind and willingness to work with the client in a consultative manner to find areas to improve;
- Good analytical skills;
- Good team player motivated to develop and solve complex tasks;
- Self-motivated, self-disciplined and result-oriented;
- Strong attention to detail and accuracy.
What's in it for you
- A Centre of Excellence (CoE) is ultimately a community that allows you to improve yourself and have fun. Our CoEs bring together Ciklumers from across the organisation to share best practices, support, advice and industry knowledge, and to build a strong community.
- Close cooperation with the client;
- A constant flow of new projects;
- Dynamic and challenging tasks;
- Ability to influence project technologies;
- Projects from scratch;
- Team of professionals: learn from colleagues and gain recognition of your skills;
- European management style;
- Continuous self-improvement.