Senior Data Engineer Engineering - Atlanta, GA at Geebo

Senior Data Engineer

Interface, Inc. is a global flooring company specializing in carbon neutral carpet tile and resilient flooring, including luxury vinyl tile (LVT) and nora rubber flooring.
We help our customers create high-performance interior spaces that support well-being, productivity, and creativity, as well as the sustainability of the planet.
Our mission, Climate Take Back(TM), invites you to join us as we commit to operating in a way that is restorative to the planet and creates a climate fit for life.

Senior Data Engineer

The Senior Data Engineer will collaborate with our Business Intelligence, Infrastructure, Business Analytics, and global business and IT teams to contribute to the implementation of a modern data warehouse.
Through the development of high-performance pipelines and critical workloads, they will help enhance the foundation of our company's primary information source, enabling business applications that drive essential decision-making on a global scale.
This position will be an integral part of the Data Engineering team, playing a key role in reshaping our approach to data and its underlying infrastructure to support advanced analytics.
At Interface, we are deeply committed to the professional development of our employees.
We believe in nurturing talent, encouraging personal growth, and fostering a culture of continuous learning.
To facilitate this, we provide access to LinkedIn Learning, where you can expand your skill set, keep abreast of industry trends, and even explore new areas of interest.
Our work structure is a hybrid model, combining the best of remote and in-office work.
You'll have the flexibility to work from home, while also benefiting from in-person collaboration during three office days each week.
This balance fosters both efficiency and camaraderie, contributing to an empowering and dynamic work environment.
Essential Functions:
Must be able to commute to the Atlanta office 2-3 days per week.
  • Design and develop scalable data models, pipelines, and infrastructure in Azure, driving insights, reporting, mobile/web applications, and machine learning
  • Participate in data engineering and data science projects for global Big Data initiatives
  • Develop and automate high-volume, batch and real-time ETL pipelines using Azure Data Factory, Azure SQL Databases, Databricks, and Python
  • Build and implement scalable cloud architecture and distributed systems using Azure Data Lake, Azure Synapse, and Snowflake
  • Use Power BI to create impactful dashboards and data visualizations
  • Deploy backend production services with an emphasis on high availability, robustness, and monitoring
  • Ensure successful production deployments in an agile environment using Azure DevOps
  • Collaborate with business and tech teams to adhere to best practices in reporting and analytics: data integrity, test design, analysis, validation, and documentation
  • Partner with business stakeholders across departments to form technical requirements and deliver data products that create business value
  • Continually improve ongoing reporting and analysis processes, automating or simplifying self-service analytics
  • Work independently, with team members, and cross-functionally in a dynamic work environment
  • Validate data pipelines to ensure high-quality data is promoted to production, using standard testing templates and automating testing processes in Azure DevOps
  • Follow the Continuous Integration process by committing all code to version control repositories
  • Communicate progress updates to stakeholders, address technical inquiries, and investigate and resolve any issues
  • Embrace the opportunity to learn and understand the company's commitment to sustainability
  • Perform other duties as assigned

Preferred Skills and Experience:
  • Bachelor's degree in Computer Science or Engineering; Master's preferred
  • 5 years of experience in Data Engineering, Software Engineering, Data Science, Machine Learning, and Artificial Intelligence using Snowflake, Azure, or AWS cloud technologies
  • 5 years of experience in Python programming, machine learning, artificial intelligence, system design, data structures, and algorithms in software development and high-volume, distributed systems
  • 5 years of experience in processing and modeling data in Python, SQL, Azure Synapse Analytics, AWS Redshift, Azure Data Factory, AWS Glue, Azure Databricks, AWS EMR, Apache Spark, or Qlik, with a strong understanding of star and snowflake schemas, OLAP/OLTP, and software engineering
  • 5 years of experience in developing, transforming, testing, and maintaining complex queries and data extracts from large, heterogeneous data sources, e.g. SQL Server, Salesforce, Excel, XML, JSON, flat files, and CSV files
  • 5 years in managing enterprise-level Big Data cloud applications across multi-disciplined technical and business teams while implementing Agile software development methodologies
  • Experience in building scalable, high-performing, and robust applications with a focus on data
  • Ability to adapt quickly and learn new data warehousing and framework tools such as Azure SQL Managed Instance, SQL Server, MongoDB, Hadoop, etc., and develop backend queries, data models, and reports in these environments
  • Proficient in Excel (VBA, macros, pivot tables, etc.)
  • Experience in using PowerShell for scripting, automation, and configuration management of cloud resources and applications
  • Experience in developing, consuming, and testing RESTful APIs using various tools and frameworks
  • Experience in working with JSON and YAML formats for data interchange and configuration
  • Proficient in working with ERPs such as JDE and SAP, leveraging your knowledge to extract and manipulate data effectively for further analysis and reporting; back-end experience is important to navigate, understand, and optimize data structures within these systems
  • Strong analytical, problem-solving, communication, and interpersonal skills

#LI-Onsite

We are a VEVRAA Federal Contractor.
We desire priority referrals of Protected Veterans for job openings at all locations within the State of Georgia.
An Equal Opportunity Employer including Veterans and Disabled.
Recommended Skills: Adaptability, Agile Methodology, Algorithms, Amazon Redshift, Analytical, Apache Hadoop

Estimated Salary: $20 to $28 per hour based on qualifications.
