BA/BS Job Offer in Computer Science at PepsiCo

Insight:

PepsiCo is recruiting an experienced analyst for its Hyderabad office. As an Analyst, Data Modeling, your goal would be to partner with D&A Data Foundation team members to create data models for global projects. This would include analyzing project data needs, identifying data storage and integration needs and issues, and creating opportunities for data model reuse while meeting project requirements. The role will champion enterprise architecture, data design, and D&A standards and best practices. You will perform all aspects of data modeling in close collaboration with the data governance, data engineering, and data architecture teams.

The complete details of this role are as follows:

Roles and responsibilities:

The ideal candidate must be able to:

Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.

Govern data design/modeling, including documentation of metadata (business definitions of entities and attributes) and constructs of database objects, for core and investment-funded projects, as assigned.

Support data analysis, requirements gathering, solution development and design reviews for enhancements or new applications/reports.

Support assigned project contractors (both on-site and overseas), orienting new contractors to standards, best practices and tools.

Contribute to project cost estimates, working with senior team members to assess the size and complexity of changes or new development.

Ensure that physical and logical data models are designed with an extensible philosophy to support future unknown use cases with minimal rework.

Develop a deep understanding of the business domain and the company’s technology inventory to build a solution roadmap that achieves business goals and maximizes reuse.

Collaborate with IT, data engineering, and other teams to ensure that the enterprise data model incorporates the key dimensions necessary for good management: business and financial policies, security, local market regulatory rules, and principles of consumer privacy by design (PII management), all linked through a common foundation of identity.

Assist with planning, sourcing, data collection, profiling and transformation.

Create source-to-target mappings for ETL and BI developers (see the sketch after this list).

Develop reusable data models based on cloud-centric, code-centric approaches to data management and cleansing.

Work with the data governance team to standardize its classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.

Support data lineage and mapping of source-system data to canonical data stores for search, analysis, and productization.
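
To ground the modeling and mapping responsibilities above, here is a minimal sketch of a physical star-schema model together with a source-to-target mapping. Python and SQLAlchemy are used purely for illustration; the posting does not prescribe a toolchain, and every table, column, and mapping entry below is hypothetical.

```python
# Minimal sketch of a physical star-schema model plus a source-to-target
# mapping. SQLAlchemy is used purely for illustration; all names are
# hypothetical and not taken from the posting.
from sqlalchemy import (
    Column, Date, ForeignKey, Integer, MetaData, Numeric, String, Table,
    create_engine,
)

metadata = MetaData()

# Dimension table: one row per product (hypothetical attributes).
dim_product = Table(
    "dim_product", metadata,
    Column("product_key", Integer, primary_key=True),
    Column("product_name", String(100), nullable=False),
    Column("category", String(50)),
)

# Fact table at a daily sales grain, keyed to the dimension.
fact_sales = Table(
    "fact_sales", metadata,
    Column("sales_key", Integer, primary_key=True),
    Column("product_key", Integer, ForeignKey("dim_product.product_key")),
    Column("sale_date", Date, nullable=False),
    Column("revenue", Numeric(12, 2), nullable=False),
)

# Source-to-target mapping handed to ETL/BI developers: source column,
# target column, and transformation rule (all hypothetical).
SOURCE_TO_TARGET = [
    {"source": "erp.orders.order_dt", "target": "fact_sales.sale_date",
     "rule": "cast to DATE"},
    {"source": "erp.orders.net_amt", "target": "fact_sales.revenue",
     "rule": "round to 2 decimal places"},
    {"source": "erp.items.item_name", "target": "dim_product.product_name",
     "rule": "trim whitespace"},
]

if __name__ == "__main__":
    # Materialize the physical model against an in-memory database so the
    # sketch runs end to end, then print the mapping document.
    engine = create_engine("sqlite:///:memory:")
    metadata.create_all(engine)
    for row in SOURCE_TO_TARGET:
        print(f"{row['source']} -> {row['target']} ({row['rule']})")
```

In practice the same structures would live in the modeling tool of record (for example ER/Studio or Erwin) and be deployed to the target warehouse; the sketch only shows the shape of the deliverables.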

The ideal candidate should also have:

Excellent communication skills, both oral and written.

Comfort with change, especially the change that comes with business growth.

Ability to understand and translate business requirements into data and technical requirements with minimal assistance from senior team members.

A positive and flexible attitude that adapts to different needs in a constantly changing environment.

Good interpersonal skills, with comfort in managing compromises.

The drive to foster a team culture focused on accountability, communication, and self-management.

A track record of consistently meeting or exceeding individual and team goals.

Eligibility:

5+ years of global technology experience, including at least 3 years of data modeling and systems architecture.

1+ years of experience with Data Lake Infrastructure, Data Warehousing and Data Analytics tools.

3+ years of experience developing enterprise data models.

Experience building solutions in the retail or supply chain space is a plus.

Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM).

Experience with profiling and data quality tools such as Apache Griffin, Deequ, and Great Expectations (a minimal sketch follows this list).

Experience building/operating highly available distributed systems for extracting, ingesting and processing large datasets.

Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.

Experience with version control systems such as GitHub, and with deployment and CI tools.

Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.

Experience with metadata management, data lineage and data glossaries is a plus.

Working knowledge of agile development, including DevOps and DataOps concepts.

Familiarity with business intelligence tools (such as Power BI).

BA/BS in Computer Science, Mathematics, Physics, or other technical fields.
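
Because the eligibility list names profiling and data quality tools, here is a minimal sketch of expectation-style checks, assuming Great Expectations' legacy pandas API (great_expectations.from_pandas, available in pre-0.18 releases); the dataframe, column names, and thresholds are hypothetical.

```python
# Minimal data quality sketch using Great Expectations' legacy pandas API
# (great_expectations.from_pandas, pre-0.18 releases). The dataframe,
# columns, and thresholds are hypothetical.
import great_expectations as ge
import pandas as pd

raw = pd.DataFrame(
    {
        # The None is deliberate so the first check visibly fails.
        "customer_id": [101, 102, 103, None],
        "revenue": [250.0, 99.5, 1200.0, 40.0],
    }
)

df = ge.from_pandas(raw)

# Every row should carry a customer identifier.
not_null = df.expect_column_values_to_not_be_null("customer_id")

# Revenue should fall inside a plausible (hypothetical) range.
in_range = df.expect_column_values_to_be_between(
    "revenue", min_value=0, max_value=10_000
)

for result in (not_null, in_range):
    print(result.expectation_config.expectation_type, "->", result.success)
```

Comparable checks could be written with Deequ or Apache Griffin; the common idea is declaring expectations against the data rather than hand-coding validation loops.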

To apply for this position, click here

Disclaimer: The recruitment information above is provided for informational purposes only and is taken from the organization's official website. We do not provide any recruitment guarantees; recruitment is conducted according to the official process of the company or organization that advertised the position. We do not charge any fee for providing this employment information. Neither the author nor Studycafe and its affiliates accept any responsibility for any loss or damage arising from the information in this article or from actions taken in reliance on it.