Date Posted: September 9, 2019
This is a key role that will be accountable for the development and operations of the Finance Data Solution to drive maximum value from data for Finance users and in line with company best practices. You will work as part of a cross-functional agile delivery team, including analysts, architects, big data engineers, machine learning engineers, and testers.
You will have the opportunity to work on complex problems, designing performant data systems which align to business expectations and use cases.
You will be working closely with the businesses to build a new large-scale, cloud hosted, secure, and consolidated data and analytics platform to enable users to interact with all their data from a single trusted platform. The platform leverages the economics of big data, cloud elasticity, Machine Learning (ML)/Artificial Intelligence (AI) automation, and permissioned data sharing to turn information into business insights and address business and operational challenges.
- Work to ensure data models and architectural patterns are aligned to user and business requirements.
- Design and lead the implementation of new core data platforms to help with the adoption of modern Data solutions.
- Champion the function's architectural solutions, consulting with other teams to drive understanding and adoption.
- Work closely with the data engineering team to design and implement enterprise-grade pipelines for data.
- Experience with data modelling (star, snowflake, and similar schemas) and performance optimisation of data tables.
- Experience working with internal customers to understand their use cases and ensure the architecture meets their needs.
- Experience with both analytical and transactional processing databases.
- Experience building enterprise-grade data pipelines (ETL and ELT).
- Experience with cloud-native data services, such as AWS Athena, Redshift, Kinesis, Managed Streaming for Apache Kafka (MSK), DynamoDB, Glue, Lambda, and S3.
- Experience building common architectural patterns and championing their adoption.
- Solid understanding of enterprise patterns and of applying best practices when integrating varied inputs and outputs at scale.
- Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration (CI)
- Working knowledge of NoSQL and Big Data tools (Hadoop, Hive, MongoDB, DynamoDB, Presto).
- Understanding of DevOps principles, tools, and the intersection with cloud architecture.
- Experience in an Agile environment; familiarity with Jira, Confluence, and Git.
- Good understanding of the principles of data management, process and delivery.
- Experience working alongside a Data Governance programme of work.
- Experience with data architecture for GDPR compliance is a nice-to-have.
- Understanding of the insurance value/supply chain.
- Great problem-solving skills, and the ability and confidence to hack your way out of tight corners.
- Ability to prioritise and meet deadlines.
- Conscientious, self-motivated, and goal orientated.
- Excellent attention to detail.
- Willingness and enthusiasm to work within existing processes and methodologies.
For more information on this role, including company details, salary, and location, please contact Alex at Jefferson Frank on 0191 814 7445 or by email.
AWS, DevOps, Big Data, ETL, ELT, Data Architecture