Qualifications:

Preferred skills and experience include:

- Bachelor's degree in maths, statistics, computer science, information management, finance, or economics
- At least 3 years' experience integrating data into analytical platforms using patterns such as APIs, flat files, XML, JSON, Hadoop file formats, and cloud file formats
- Experience with ingestion technologies (e.g. Sqoop, NiFi, Flume), processing technologies (Spark/Scala), and storage (e.g. HDFS, HBase, Hive) is essential
- Experience designing and building data pipelines using cloud platform solutions and native tools
- Experience with Python and JVM-compatible languages, and with CI/CD tools such as Jenkins, Bitbucket, Nexus, and SonarQube
- Experience in data profiling, source-to-target mappings, ETL development, SQL optimisation, testing, and implementation
- Expertise in streaming frameworks (Kafka/Spark Streaming/Storm) is essential
- Experience managing structured and unstructured data types
- Experience in requirements engineering, solution architecture, design, and development/deployment
- Experience creating big data or analytics IT solutions
- Track record of implementing databases, data-access middleware, and high-volume batch and (near) real-time processing

Job Description:

- Implement requests for the ingestion, creation, and preparation of data sources
- Develop and execute jobs to import data from external sources periodically or in (near) real time
- Set up streaming data sources to ingest data into the platform (see the illustrative sketch after this list)
- Deliver the data sourcing approach and data sets for analysis, including data staging, ETL, data quality, and archiving
- Design solution architectures on both on-premises and cloud platforms to meet business, technical, and user requirements
- Profile source data and validate that it is fit for purpose
- Work with the Delivery Lead and Solution Architect to agree on pragmatic means of data provision to support use cases
- Understand and document end-user usage models and requirements
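
As a rough illustration of the streaming-ingestion responsibility above, the following is a minimal Spark Structured Streaming sketch in Scala that reads from a Kafka topic and lands the data on HDFS. It is only an example of the kind of pipeline involved; the broker address, topic name, and paths are placeholder assumptions, not details of the role.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object StreamingIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-ingest")
      .getOrCreate()

    // Subscribe to a Kafka topic (broker and topic names are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "source-events")
      .load()

    // Kafka delivers the payload as binary; cast it to string for downstream parsing.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Land the data as Parquet on HDFS in micro-batches, with a checkpoint location
    // so the sink can recover after restarts (paths are placeholders).
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/landing/source_events")
      .option("checkpointLocation", "hdfs:///checkpoints/source_events")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start()

    query.awaitTermination()
  }
}
```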