NTT Data Jobs in USA
Location: Atlanta, Georgia
Full/Part Time: Full-Time
Regular/Temporary: Regular
About Us
Georgia Tech is a top-ranked public research university situated in the heart of Atlanta, a diverse and vibrant city with numerous economic and cultural strengths. The Institute serves more than 45,000 students through top-ranked undergraduate, graduate, and executive programs in engineering, computing, science, business, design, and liberal arts. Georgia Tech's faculty attracted more than $1.4 billion in research awards this past year in fields ranging from biomedical technology to artificial intelligence, energy, sustainability, semiconductors, neuroscience, and national security. Georgia Tech ranks among the nation's top 20 universities for research and development spending and No. 1 among institutions without a medical school.
Georgia Tech's Mission and Values
Georgia Tech's mission is to develop leaders who advance technology and improve the human condition. The Institute has nine key values that are foundational to everything we do:
1. Students are our top priority.
2. We strive for excellence.
3. We thrive on diversity.
4. We celebrate collaboration.
5. We champion innovation.
6. We safeguard freedom of inquiry and expression.
7. We nurture the wellbeing of our community.
8. We act ethically.
9. We are responsible stewards.
Over the next decade, Georgia Tech will become an example of inclusive innovation, a leading technological research university of unmatched scale, relentlessly committed to serving the public good; breaking new ground in addressing the biggest local, national, and global challenges and opportunities of our time; making technology broadly accessible; and developing exceptional, principled leaders from all backgrounds ready to produce novel ideas and create solutions with real human impact.
Job Summary
The Manager of Data is responsible for overseeing the collection, management, and analysis of institutional data to support decision-making and strategic planning. This role involves leading a team of data analysts and ensuring data integrity, security, and compliance with relevant regulations. Additionally, the manager collaborates with various departments to develop data governance policies and implement effective data management practices that enhance the institution's ability to leverage data for improved outcomes.
Responsibilities
Job Duty 1 -
Oversee the development and implementation of data management strategies to ensure the accurate collection, storage, and retrieval of institutional data.
Job Duty 2 -
Lead a team of data analysts in conducting data analysis and reporting to support institutional decision-making and strategic initiatives.
Job Duty 3 -
Establish and enforce data governance policies to ensure data quality, integrity, and compliance with relevant regulations and standards.
Job Duty 4 -
Monitor data management systems and tools, ensuring they are maintained, updated, and aligned with best practices in data security and privacy.
Job Duty 5 -
Provide training and support to staff on data management practices, tools, and analytical techniques to foster a data-driven culture within the institution.
Job Duty 6 -
Conduct regular audits of data processes and systems to identify areas for improvement and implement corrective actions as needed.
Job Duty 7 -
Prepare and present comprehensive reports on data trends, analysis findings, and management initiatives to senior leadership and relevant stakeholders.
Job Duty 8 -
Stay informed about emerging data management technologies and methodologies to continually enhance the institution's data management capabilities.
Job Duty 9 -
Collaborate with academic and administrative departments to identify data needs and develop solutions that enhance data accessibility and usability.
Job Duty 10 -
Perform other duties as assigned.
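The governance and audit duties above can be illustrated with a minimal sketch of an automated data-quality check. The field names and rules below are hypothetical, not taken from the posting:

```python
# Minimal sketch of an automated data-quality audit of the kind a data
# governance policy might mandate. Field names and rules are hypothetical.

def audit_records(records, rules):
    """Return a list of (index, field, message) for every rule violation."""
    violations = []
    for i, record in enumerate(records):
        for field, check, message in rules:
            if not check(record.get(field)):
                violations.append((i, field, message))
    return violations

# Example rules: required field present, value within an allowed range.
rules = [
    ("student_id", lambda v: isinstance(v, str) and v != "", "missing student_id"),
    ("gpa", lambda v: isinstance(v, (int, float)) and 0.0 <= v <= 4.0, "gpa out of range"),
]

records = [
    {"student_id": "gt001", "gpa": 3.4},
    {"student_id": "", "gpa": 5.1},
]

print(audit_records(records, rules))
# → [(1, 'student_id', 'missing student_id'), (1, 'gpa', 'gpa out of range')]
```

Keeping rules as data, separate from the audit loop, makes it straightforward to add or tighten checks as governance policies evolve.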
Required Qualifications
Educational Requirements
Bachelor's degree in a related discipline, or equivalent related experience.
Required Experience
5+ years of relevant experience, including 3+ years of supervisory experience.
Preferred Qualifications
Preferred Educational Qualifications
- Master's degree in Computer Science, Information Technology, Information Systems, Data Science, Business Administration, or a related discipline, or equivalent related experience.
- Certified Data Management Professional certification.
- Experience designing, implementing and operating Security Information Management solutions such as SIMS or ThreatSwitch.
- Advanced knowledge of SQL, database design, and data modeling.
- Experience in managing and securing enterprise security database systems containing sensitive and regulated data.
- Experience in cross-departmental collaboration during security investigations, assessments, and compliance reviews.
USG Core Values
The University System of Georgia comprises our 25 institutions of higher education and learning as well as the System Office. Our USG Statement of Core Values is Integrity, Excellence, Accountability, and Respect. These values serve as the foundation for all that we do as an organization, and each USG community member is responsible for demonstrating and upholding these standards. More details on the USG Statement of Core Values and Code of Conduct are available in USG Board Policy 8.2.18.1.2 and can be found on-line at policymanual/section8/C224/#p8.2.18_personnel_conduct.
Additionally, USG supports Freedom of Expression as stated in Board Policy 6.5 Freedom of Expression and Academic Freedom found on-line at policymanual/section6/C2653.
Equal Employment Opportunity
The Georgia Institute of Technology (Georgia Tech) is an Equal Employment Opportunity Employer. The Institute is committed to maintaining a fair and respectful environment for all. To that end, and in accordance with federal and state law, Board of Regents policy, and Institute policy, Georgia Tech provides equal opportunity to all faculty, staff, students, and all other members of the Georgia Tech community, including applicants for admission and/or employment, contractors, volunteers, and participants in institutional programs, activities, or services. Georgia Tech complies with all applicable laws and regulations governing equal opportunity in the workplace and in educational activities.
Equal opportunity and decisions based on merit are fundamental values of the University System of Georgia ("USG") and Georgia Tech. Georgia Tech prohibits discrimination, including discriminatory harassment, on the basis of an individual's race, ethnicity, ancestry, color, religion, sex (including pregnancy), national origin, age, disability, genetics, or veteran status in its programs, activities, employment, and admissions. Further, Georgia Tech prohibits citizenship status, immigration status, and national origin discrimination in hiring, firing, and recruitment, except where such restrictions are required in order to comply with law, regulation, executive order, or Attorney General directive, or where they are required by Federal, State, or local government contract.
Other Information
This is a supervisory position.
This position does not have any financial responsibilities.
This position will have some driving.
This role is considered a position of trust.
This position does not require a purchasing card (P-Card).
This position will have some traveling.
This position does not require security clearance or the ability to obtain one.
This position is located in Atlanta, GA.
The salary range depends on the candidate's experience and skills and ranges from $109,136 to $159,284.
You must be a US citizen to be considered for this role.
Other Information
The Georgia Tech Research Institute (GTRI) is the nonprofit, applied research division of the Georgia Institute of Technology (Georgia Tech). This position is in the Research Security Department (RS) of GTRI.
Background Check
The successful candidate must be able to pass a background check. Please visit employment/pre-employment-screening.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Staff Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Design and implement robust data infrastructure in AWS, using Spark with Scala
- Evolve our core data pipelines to efficiently scale for our massive growth
- Store data in optimal engines and formats, matching your designs to our performance needs and cost factors
- Collaborate with our cross-functional teams to design data solutions that meet business needs
- Design and implement knowledge graphs, exposing their functionality both via Batch Processing and APIs
- Leverage and optimize AWS resources while designing for scale
- Collaborate closely with our Data Science and Product teams
How we'll define success:
- Successful design and implementation of scalable and efficient data infrastructure
- Timely delivery and optimization of data assets and APIs
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
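The knowledge-graph duty above can be sketched in miniature. The posting's real system would run on Spark with Scala on AWS; this plain-Python sketch only illustrates the underlying idea of storing typed relationships and answering queries over them, and all entity and relation names are hypothetical:

```python
# Plain-Python sketch of a relationship-heavy dataset exposed via a query
# API: entities connected by typed edges, with simple neighbor lookups.
# All names are illustrative; a production system would be distributed.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # edges[source][relation] -> set of target entities
        self.edges = defaultdict(lambda: defaultdict(set))

    def add_edge(self, source, relation, target):
        self.edges[source][relation].add(target)

    def neighbors(self, source, relation):
        """Return the targets reachable from source via relation, sorted."""
        return sorted(self.edges[source][relation])

kg = KnowledgeGraph()
kg.add_edge("ad_1", "targets", "segment_sports")
kg.add_edge("ad_1", "targets", "segment_news")
kg.add_edge("segment_sports", "contains", "device_42")

print(kg.neighbors("ad_1", "targets"))
# → ['segment_news', 'segment_sports']
```

The same adjacency structure can back both batch jobs (iterate over all edges) and an API (point lookups by source and relation), matching the dual exposure the role describes.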
What we're looking for:
- Production data engineering experience
- Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
- Experience in delivering significant technical initiatives and building reliable, large scale services
- Experience in delivering APIs backed by relationship-heavy datasets
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Expertise in SQL for data manipulation and extraction
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
Nice-to-haves:
- Experience in adtech
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data table formats like Apache Iceberg and Delta Lake
- Previous experience building out a Data Engineering function
- Proven experience working closely with Data Science teams on machine learning pipelines
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $155,584 - $320,320 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Position title:
Associate Librarian-Temporary
Salary range:
The UC academic salary scales set the minimum pay, determined by rank and salary point at appointment. See the current salary scale(s) for this position. A reasonable estimate for this position is $80,349-$102,121.
Percent time:
100%
Anticipated start:
As early as January 2025. Start date is flexible.
Position duration:
This is a two-year, temporary full-time appointment with the possibility of renewal for one additional year based on funding and performance.
Application Window
Open date: October 14, 2025
Most recent review date: Friday, Nov 14, 2025 at 11:59pm (Pacific Time)
Applications received after this date will be reviewed by the search committee if the position has not yet been filled.
Final date: Tuesday, Mar 31, 2026 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.
Position description
The Environment
The UC Berkeley Library is an internationally renowned research and teaching facility at one of the nation's premier public universities. A highly diverse and intellectually rich environment, Berkeley serves a campus community of 33,070 undergraduate students, 12,812 graduate students, and 1,525 faculty members. The Library comprises 20 campus libraries, including the Doe Library, the Moffitt Library, the Bancroft Library, the C.V. Starr East Asian Library, and numerous subject specialty libraries. With a collection of more than 12 million volumes and a collections budget of over $15 million, the Library offers extensive resources in all formats and robust services to connect users with the collections and build their research skills.
The Library Data Services Program (LDSP) guides scholars to discover, access, share, and preserve data through dataset acquisition, discovery, and librarian-led instruction and consultations. LDSP works with UC Berkeley librarians and library staff to provide internal professional development and training opportunities to enhance our data management skills and provide data services for our users. LDSP provides data services to all disciplines at UC Berkeley through collaboration with librarians and library staff in the Library's divisions including Arts and Humanities, Instruction Services, Sciences, and Social Sciences.
Responsibilities
The Data Instruction and Outreach Librarian will work in partnership with and under the guidance of the Data Services Librarian in the Library Data Services Program. Librarians and staff in the Library Data Services Program collaborate closely with subject librarians and library staff to provide data support for researchers, faculty, and students who are using data in research, teaching, and learning. This librarian will actively participate in the Library's instruction services: providing consultations, teaching workshops, and designing instructional content.
The Data Instruction and Outreach Librarian will develop and maintain a scaffolded approach to data instruction that includes: supporting novice users, including undergraduate students, who may be unaccustomed to working with data; providing instruction and outreach for the library's licensed datasets and platforms; and collaborating with subject and instruction librarians to teach data literacy, ethics, analysis, and visualization. This position also works to integrate data literacy and data science pedagogical practices into the library's instructional portfolio, with a special focus on undergraduates. Since using AI in data analysis workflows is becoming more common, the librarian will help faculty, researchers, and students use licensed AI tools in conjunction with ethical and legal use of library resources, as well as within the broader University context.
The librarian will engage with the Research Data Management Program. This unique program bridges the UC Berkeley Library and Research IT to conduct outreach related to data management, storage, and sharing. UC Berkeley, through entities like the Library, the College of Computing, Data Science, and Society, the D-Lab, the Berkeley Institute for Data Science, and the Berkeley Initiative for Transparency in the Social Sciences, actively supports the development of research and classroom environments that champion transparency and reproducibility. The successful incumbent will collaborate with these campus partners and librarians to facilitate open research practices and workflows where possible.
UC Berkeley librarians are expected to participate in library-wide planning and governance and work effectively in a shared decision-making environment. Advancement is partially based upon professional contributions beyond the primary assignment; the successful candidate will show evidence or promise of such contributions to the Library, campus, UC System, and profession.
The UC Berkeley Library is committed to supporting and encouraging respect and empathy, and nurturing a culture where all employees thrive. The Library seeks candidates who recognize and appreciate one another's contributions, expertise, and accomplishments and will strive to provide equitable access to a diverse set of collections and services. For more information, please see the UC Berkeley Library Statement of Values.
UC professional librarians are academic appointees and are represented by an exclusive bargaining agent, the University Council - American Federation of Teachers (UC-AFT). This position is in the bargaining unit. Librarians are entitled to appropriate professional development leave, vacation leave, sick leave, and all other benefits granted to non-faculty academic personnel. The University has an excellent retirement system and sponsors a variety of group health, dental, vision, and life insurance plans in addition to other benefits. This is an externally funded appointment.
This position is eligible for some remote work. Exact arrangements are determined in partnership with your supervisor to meet role responsibilities and department needs, and are subject to change.
UC Berkeley Library Website:
UC Berkeley Library statement of values: about/library-values
Qualifications
Basic qualifications (required at time of application)
Advanced degree or enrolled in an advanced degree program.
Additional qualifications (required at time of start)
- Advanced degree.
- Two years of experience providing reference and instructional services in an academic or professional setting.
Preferred qualifications
- Master's degree from an American Library Association (ALA) accredited program, or an equivalent degree.
- Demonstrated commitment to the Library's values.
- Demonstrated analytical, presentation, and communication skills.
- Demonstrated ability to provide effective instruction and training related to digital literacy, artificial intelligence, technology skills, and/or research data management.
- Experience working with languages, platforms, and environments that support interactive, computational research, such as Python, R, Jupyter, GitHub, and/or Unix Shell.
- Participation in The Carpentries or other data science education program.
- Demonstrated experience working effectively with all staff in a highly collaborative, matrixed environment.
- Experience working collaboratively with multiple stakeholders in an academic environment.
- Experience accessing, creating, analyzing, or manipulating qualitative and/or quantitative data for academic research purposes. This may include experience with datasets in the sciences (e.g. astronomy, ecology, genomics) or social sciences (e.g., government, financial, survey).
- Demonstrated knowledge and application of data ethics including awareness of key topics related to artificial intelligence, data privacy, security, and bias as well as legal uses of licensed and open data.
Application Requirements
Document requirements
Curriculum Vitae - Your most recently updated C.V.
Cover Letter
Reference requirements
- 3-5 required (contact information only)
Apply link:
JPF05126
Help contact:
About UC Berkeley
UC Berkeley is committed to diversity, equity, inclusion, and belonging in our public mission of research, teaching, and service, consistent with UC Regents Policy 4400 and University of California Academic Personnel policy (APM 210 1-d). These values are embedded in our Principles of Community, which reflect our passion for critical inquiry, debate, discovery and innovation, and our deep commitment to contributing to a better world. Every member of the UC Berkeley community has a role in sustaining a safe, caring and humane environment in which these values can thrive.
The University of California, Berkeley is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status.
For more information, please refer to the University of California's Affirmative Action and Nondiscrimination in Employment Policy and the University of California's Anti-Discrimination Policy.
In searches when letters of reference are required all letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (i.e., dossier service or career center), to the UC Berkeley statement of confidentiality prior to submitting their letter.
As a University employee, you will be required to comply with all applicable University policies and/or collective bargaining agreements, as may be amended from time to time. Federal, state, or local government directives may impose additional requirements.
Unless stated otherwise, unambiguously, in the position description, this position does not include sponsorship of a new consular H-1B visa petition that would require payment of the $100,000 supplemental fee.
As a condition of employment, the finalist will be required to disclose if they are subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct.
- "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment or discrimination, as defined by the employer.
- UC Sexual Violence and Sexual Harassment Policy
- UC Anti-Discrimination Policy
- APM - 035: Affirmative Action and Nondiscrimination in Employment
Job location
Berkeley, CA
Location: Atlanta, Georgia
Full/Part Time: Full-Time
Regular/Temporary: Regular
About Us
Overview
Georgia Tech prides itself on its technological resources, collaborations, high-quality student body, and its commitment to building an outstanding and diverse community of learning, discovery, and creation. We strongly encourage applicants whose values align with our institutional values, as outlined in our Strategic Plan. These values include academic excellence, diversity of thought and experience, inquiry and innovation, collaboration and community, and ethical behavior and stewardship. Georgia Tech has policies to promote a healthy work-life balance and is aware that attracting faculty may require meeting the needs of two careers.
Department Information
The Office of Institutional Research and Planning (IRP) at Georgia Tech is a research and analytics service unit dedicated to supporting the campus community. Our team of institutional research and data analytics professionals combines technical and creative skills to inform institutional strategic decision-making, planning, and research across campus. In addition to institutional reporting and compliance, IRP provides data education, support, and resources to all campus units.
Visit our website to learn more about what we do:
Job Summary
Data Analysts analyze data, interpret trends and patterns, and provide insights to support decision-making processes. They develop data models, perform data mining and statistical analysis, and collaborate with stakeholders to optimize data-driven strategies.
Responsibilities
Job Duty 1 -
Collect, analyze, and interpret data from various sources, databases, and systems to extract insights, trends, and patterns that inform business decisions, strategies, and operations.
Job Duty 2 -
Develop and maintain data models, queries, and reports using SQL, Python, R, or data analysis tools to perform data cleansing, transformation, and visualization tasks.
Job Duty 3 -
Identify data quality issues, anomalies, and discrepancies in datasets, conduct data validation, data profiling, and data integrity checks to ensure data accuracy and reliability.
Job Duty 4 -
Create data visualizations, dashboards, and data analytics reports to communicate data findings, trends, and key metrics to stakeholders, management, and decision-makers.
Job Duty 5 -
Conduct ad-hoc data analysis, exploratory data analysis, and statistical analysis to support decision-making processes, performance monitoring, and data-driven insights.
Job Duty 6 -
Perform data mining, predictive analytics, and machine learning tasks to uncover hidden patterns, predict outcomes, and drive data-driven decision-making in organizations.
Job Duty 7 -
Utilize data analytics tools, business intelligence platforms, and statistical software packages to conduct data analysis, data modeling, and data visualization tasks efficiently and accurately.
Job Duty 8 -
Stay current on data analytics trends, tools, and methodologies through training, certifications, and industry publications to enhance data analysis skills and knowledge.
Job Duty 9 -
Collaborate with business users, data scientists, and Information Technology teams to define data requirements, analytics requirements, and data-driven solutions for business problems and opportunities.
Job Duty 10 -
Perform other job-related duties as assigned.
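The cleansing and profiling workflow described in the duties above can be sketched with only the standard library. The column name and missing-value markers here are hypothetical:

```python
# Minimal sketch of column cleansing and profiling: drop missing values,
# coerce the rest to float, and summarize. Column names are hypothetical.
import statistics

def profile_column(values):
    """Clean a column of raw values and return simple summary statistics."""
    cleaned = []
    for v in values:
        if v in (None, "", "NA"):
            continue  # treat these markers as missing
        cleaned.append(float(v))
    return {
        "n": len(cleaned),
        "missing": len(values) - len(cleaned),
        "mean": round(statistics.mean(cleaned), 2),
        "stdev": round(statistics.stdev(cleaned), 2) if len(cleaned) > 1 else 0.0,
    }

enrollment = [45000, "NA", 45200, None, 45100]
print(profile_column(enrollment))
# → {'n': 3, 'missing': 2, 'mean': 45100.0, 'stdev': 100.0}
```

Reporting the missing count alongside the statistics surfaces data-quality issues before any downstream analysis depends on the column.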
Responsibilities
The Institutional Research Data Analyst will also be expected to perform various duties specific to institutional research, including but not limited to:
- Responding to intermediate to high difficulty/complexity ad-hoc data and analysis requests
- Adhering to federal, state, and institutional policies, regulations, and requirements related to data security, privacy, and governance
- Completing or supporting the completion of externally-driven compliance and data-related reporting, including:
  - Federal, e.g., IPEDS, NSF-HERD, NSF-GSS, etc.
  - State, e.g., USG data collections, data requests, etc.
  - Higher education organizations, e.g., AAUDE, SREB, NSC, accrediting bodies, etc.
Required Qualifications
Educational Requirements
Bachelor's degree in a related discipline or an equivalent combination of education and experience. Advanced certification may be preferred or required (some profiles may require additional education).
Required Experience
Four or more years of relevant experience.
Proposed Salary
Annual Salary Range: $75,751 to $80,000
Knowledge, Skills, & Abilities
SKILLS
- Performs all the standard and technical aspects of the job
- Applies in-depth professional, technical, or industry knowledge to manage significantly complex assignments/projects/programs
- Advanced knowledge of principles and practices of a particular field of specialization and Institute policies, practices, and procedures
USG Core Values
The University System of Georgia is comprised of our 25 institutions of higher education and learning as well as the System Office. Our USG Statement of Core Values are Integrity, Excellence, Accountability, and Respect. These values serve as the foundation for all that we do as an organization, and each USG community member is responsible for demonstrating and upholding these standards. More details on the USG Statement of Core Values and Code of Conduct are available in USG Board Policy 8.2.18.1.2 and can be found on-line at policymanual/section8/C224/#p8.2.18_personnel_conduct.
Additionally, USG supports Freedom of Expression as stated in Board Policy 6.5 Freedom of Expression and Academic Freedom, found online at policymanual/section6/C2653.
Equal Employment Opportunity
The Georgia Institute of Technology (Georgia Tech) is an Equal Employment Opportunity Employer. The Institute is committed to maintaining a fair and respectful environment for all. To that end, and in accordance with federal and state law, Board of Regents policy, and Institute policy, Georgia Tech provides equal opportunity to all faculty, staff, students, and all other members of the Georgia Tech community, including applicants for admission and/or employment, contractors, volunteers, and participants in institutional programs, activities, or services. Georgia Tech complies with all applicable laws and regulations governing equal opportunity in the workplace and in educational activities.
Equal opportunity and decisions based on merit are fundamental values of the University System of Georgia ("USG") and Georgia Tech. Georgia Tech prohibits discrimination, including discriminatory harassment, on the basis of an individual's race, ethnicity, ancestry, color, religion, sex (including pregnancy), national origin, age, disability, genetics, or veteran status in its programs, activities, employment, and admissions. Further, Georgia Tech prohibits citizenship status, immigration status, and national origin discrimination in hiring, firing, and recruitment, except where such restrictions are required in order to comply with law, regulation, executive order, or Attorney General directive, or where they are required by Federal, State, or local government contract.
Other Information
This is not a supervisory position.
This position does not have any financial responsibilities.
This position will not be required to drive.
This role is not considered a position of trust.
This position does not require a purchasing card (P-Card).
This position will not travel.
This position does not require security clearance.
Background Check
Successful candidate must be able to pass a background check. Please visit employment/pre-employment-screening
Company Description
Press Ganey is the leading experience measurement, data analytics, and insights provider for complex industries, a status we earned over decades of deep partnership with clients to help them understand and meet the needs of their key stakeholders. Our earliest roots are in U.S. healthcare, perhaps the most complex of all industries. Today we serve clients around the globe in every industry to help them improve the Human Experiences at the heart of their business. We serve our clients through an unparalleled offering that combines technology, data, and expertise to enable them to pinpoint and prioritize opportunities, accelerate improvement efforts, and build lifetime loyalty among their customers and employees.
Like all great companies, our success is a function of our people and our culture. Our employees have world-class talent, a collaborative work ethic, and a passion for the work that has earned us trusted advisor status among the world's most recognized brands. As a member of the team, you will help us create value for our clients and make us better through your contributions to the work and your voice in the process. Ours is a path of learning and continuous improvement; team efforts chart the course for corporate success.
Our Mission:
We empower organizations to deliver the best experiences. With industry expertise and technology, we turn data into insights that drive innovation and action.
Our Values:
To put Human Experience at the heart of organizations so every person can be seen and understood.
Energize the customer relationship: Our clients are our partners. We make their goals our own, working side by side to turn challenges into solutions.
Success starts with me: Personal ownership fuels collective success. We each play our part and empower our teammates to do the same.
Commit to learning: Every win is a springboard. Every hurdle is a lesson. We use each experience as an opportunity to grow.
Dare to innovate: We challenge the status quo with creativity and innovation as our true north.
Better together: We check our egos at the door. We work together, so we win together.
We are seeking an experienced Staff Data Engineer to join our Unified Data Platform team. The ideal candidate will design, develop, and maintain enterprise-scale data infrastructure leveraging Azure and Databricks technologies. This role involves building robust data pipelines, optimizing data workflows, and ensuring data quality and governance across the platform. You will collaborate closely with analytics, data science, and business teams to enable data-driven decision-making.
Duties & Responsibilities:
- Design, build, and optimize data pipelines and workflows in Azure and Databricks, including Data Lake and SQL Database integrations.
- Implement scalable ETL/ELT frameworks using Azure Data Factory, Databricks, and Spark.
- Optimize data structures and queries for performance, reliability, and cost efficiency.
- Drive data quality and governance initiatives, including metadata management and validation frameworks.
- Collaborate with cross-functional teams to define and implement data models aligned with business and analytical requirements.
- Maintain clear documentation and enforce engineering best practices for reproducibility and maintainability.
- Ensure adherence to security, compliance, and data privacy standards.
- Mentor junior engineers and contribute to establishing engineering best practices.
- Support CI/CD pipeline development for data workflows using GitLab or Azure DevOps.
- Partner with data consumers to publish curated datasets into reporting tools such as Power BI.
- Stay current with advancements in Azure, Databricks, Delta Lake, and data architecture trends.
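The pipeline and workflow duties above typically follow a layered bronze/silver/gold (medallion) pattern on Databricks: ingest raw data, validate and type it, then aggregate for analytics. A minimal pure-Python sketch of that flow (the field names and records are invented for illustration; on Databricks each stage would be a Delta table transformed with Spark):

```python
# Medallion-style flow sketched with plain Python lists; on Databricks the
# same stages would be Delta tables transformed with Spark DataFrames.
raw_events = [  # bronze: data exactly as ingested, including a bad record
    {"order_id": "1", "amount": "19.99"},
    {"order_id": "2", "amount": "bad"},
    {"order_id": "3", "amount": "5.00"},
]

def to_silver(rows):
    """Validate and type-cast; drop records that fail parsing."""
    silver = []
    for row in rows:
        try:
            silver.append({"order_id": int(row["order_id"]),
                           "amount": float(row["amount"])})
        except ValueError:
            pass  # in practice, route failures to a quarantine table
    return silver

def to_gold(rows):
    """Aggregate the cleaned data to a business-level summary."""
    return {"order_count": len(rows),
            "total_amount": round(sum(r["amount"] for r in rows), 2)}

gold = to_gold(to_silver(raw_events))
print(gold)  # the malformed record is excluded from the totals
```

The value of the layering is that each stage is independently testable and replayable, which is what makes "reproducibility and maintainability" enforceable in practice.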
Technical Skills:
- Advanced proficiency in Azure (5+ years; Data Lake, ADF, SQL).
- Strong expertise in Databricks (5+ years), Apache Spark (5+ years), and Delta Lake (5+ years).
- Proficient in SQL (10+ years) and Python (5+ years); familiarity with Scala is a plus.
- Strong understanding of data modeling, data governance, and metadata management.
- Knowledge of source control (Git), CI/CD, and modern DevOps practices.
- Familiarity with the Power BI visualization tool.
Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or related field.
- 7+ years of experience in data engineering, with significant hands-on work in cloud-based data platforms (Azure).
- Experience building real-time data pipelines and streaming frameworks.
- Strong analytical and problem-solving skills.
- Proven ability to lead projects and mentor engineers.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Master's degree in Computer Science, Engineering, or a related field.
- Exposure to machine learning integration within data engineering pipelines.
Don't meet every single requirement? Studies have shown that women and people of color are less likely to apply to jobs unless they meet every single qualification. At Press Ganey we are dedicated to building a diverse, inclusive, and authentic workplace, so if you're excited about this role but your past experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.
Additional Information for US based jobs:
Press Ganey Associates LLC is an Equal Employment Opportunity/Affirmative Action employer and is committed to a diverse workforce. We do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, veteran status, disability, or any other federal, state, or local protected class.
Pay Transparency Non-Discrimination Notice - Press Ganey will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information.
The expected base salary for this position ranges from $110,000 to $170,000. It is not typical for offers to be made at or near the top of the range. Salary offers are based on a wide range of factors including relevant skills, training, experience, education, and, where applicable, licensure or certifications obtained. Market and organizational factors are also considered. In addition to base salary and a competitive benefits package, successful candidates are eligible to receive a discretionary bonus or commission tied to achieved results.
All your information will be kept confidential according to EEO guidelines.
Our privacy policy can be found here: legal-privacy/
Visa Status: US Citizen or Green Card Only
Location: Irving, TX (Local Candidates Only)
Employment Type: Full-time / Direct Hire
Work Environment: Hybrid (Monday through Thursday in office / Friday at home)
***MUST HAVE 10+ YEARS EXPERIENCE AS A DATA ENGINEER***
***US Citizen or Green Card Only***
The AWS Senior Data Engineer will own the planning, design, and implementation of data structures for this leading Hospitality Corporation in their AWS environment. This role will be responsible for incorporating all internal and external data sources into a robust, scalable, and comprehensive data model within AWS to support business intelligence and analytics needs throughout the company.
Responsibilities:
- Collaborate with cross-functional teams to understand and define business intelligence needs and translate them into data modeling solutions
- Develop, build, and maintain scalable data pipelines, data schema designs, and dimensional data models in Databricks and AWS for all system data sources, API integrations, and bespoke data ingestion files from external sources, including batch and real-time pipelines.
- Responsible for data cleansing, standardization, and quality control
- Create data models that will support comprehensive data insights, business intelligence tools, and other data science initiatives
- Create data models and ETL procedures with traceability, data lineage and source control
- Design and implement data integration and data quality framework
- Implement data monitoring best practices with trigger based alerts for data processing KPIs and anomalies
- Investigate and remediate data problems, performing and documenting thorough and complete root cause analyses. Make recommendations for mitigation and prevention of future issues.
- Work with Business and IT to assess efficacy of all legacy data sources, making recommendations for migration, anonymization, archival and/or destruction.
- Continually seek to optimize performance through database indexing, query optimization, stored procedures, etc.
- Ensure compliance with data governance and data security requirements, including data life cycle management, purge and traceability.
- Create and manage documentation and change control mechanisms for all technical design, implementations and systems maintenance.
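Trigger-based alerting on data-processing KPIs, as described above, usually reduces to comparing each run's metrics against agreed thresholds. A stdlib-only sketch; the metric names and threshold values are assumptions for illustration, not the company's actual KPIs:

```python
# Threshold-based anomaly check for pipeline run metrics (illustrative values).
THRESHOLDS = {
    "row_count_min": 1000,   # alert if a load shrinks unexpectedly
    "null_rate_max": 0.05,   # alert if >5% of a key column is null
    "duration_s_max": 3600,  # alert if the job runs over an hour
}

def check_run(metrics):
    """Return a list of alert messages for any breached threshold."""
    alerts = []
    if metrics["row_count"] < THRESHOLDS["row_count_min"]:
        alerts.append(f"row_count {metrics['row_count']} below minimum")
    if metrics["null_rate"] > THRESHOLDS["null_rate_max"]:
        alerts.append(f"null_rate {metrics['null_rate']:.2%} above maximum")
    if metrics["duration_s"] > THRESHOLDS["duration_s_max"]:
        alerts.append(f"duration {metrics['duration_s']}s over limit")
    return alerts

alerts = check_run({"row_count": 800, "null_rate": 0.01, "duration_s": 120})
print(alerts)  # a single alert: the row count fell below its floor
```

In a production AWS setup the same check would typically run as a step after each load (e.g., in a scheduled job) and publish breaches to a notification channel rather than print them.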
Target Skills and Experience
- Bachelor's or graduate degree in computer science, information systems or related field preferred, or similar combination of education and experience
- At least 10 years' experience designing and managing data pipelines, schema modeling, and data processing systems.
- Experience with Databricks a plus (or similar tools like Microsoft Fabric, Snowflake, etc.) to drive scalable data solutions.
- Experience with SAP a plus
- Proficient in Python, with a track record of solving real-world data challenges.
- Advanced SQL skills, including experience with database design, query optimization, and stored procedures.
- Experience with Terraform or other infrastructure-as-code tools is a plus.
Overview
We are seeking a seasoned Analytics leader to build and lead our enterprise Analytics and Data Governance function in a modern group purchasing / procurement environment. This leader will turn our rich ecosystem of member, supplier, contract, and transaction data into a strategic asset that drives savings, compliance, growth, and differentiated insight for our members and suppliers.
This leader will also own the data governance operating model, enterprise metrics, and analytics roadmap that power member-facing insights, internal performance management, and AI use cases across the technology platform (Website, B2B eCommerce, supplier portal, sourcing tools, and partner integrations).
Key responsibilities
Data governance and policy
- Define and run the enterprise data governance framework covering member, supplier, contract, item, and transaction data domains.
- Establish data ownership and stewardship across functions (Category Management, Supplier Management, Finance, Sales, Marketing, Digital) driving clear accountabilities for data quality and definitions.
- Implement policies for responsible use of data in supplier programs, member reporting, and AI/ML models, ensuring compliance with contractual, regulatory, and privacy requirements.
- Drive data quality management (profiling, remediation, SLAs) for critical assets such as contract price files, item catalogs, rebate/accrual data, and member hierarchies.
- Oversee metadata, business glossary, and data lineage so teams can confidently understand "one source of truth" for core GPO metrics (e.g., committed vs. actual spend, penetration, compliance, savings delivered).
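A "one source of truth" matters because metrics like penetration are easy to compute inconsistently across teams. One common definition, on-contract spend divided by total addressable spend, sketched with invented figures:

```python
def penetration(on_contract_spend, total_spend):
    """Share of addressable spend flowing through GPO contracts.

    This definition (on-contract / total addressable) is one plausible
    governed formula, not necessarily this organization's.
    """
    if total_spend <= 0:
        raise ValueError("total spend must be positive")
    return on_contract_spend / total_spend

# Illustrative member: $4.2M of $6.0M addressable spend is on contract.
rate = penetration(4_200_000, 6_000_000)
print(f"{rate:.0%}")  # 70%
```

Pinning the formula (and its denominator) in the business glossary is exactly what keeps "committed vs. actual spend" and "penetration" from drifting between category teams and member reports.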
Analytics strategy and delivery
- Define the enterprise analytics vision and roadmap aligned to procurement value levers: spend visibility, category performance, contract compliance, leakage detection, rebate optimization, and supplier performance.
- Lead the design and delivery of standardized KPI suites and dashboards for executives, category teams, supplier partners, and member account teams (e.g., savings scorecards, compliance heatmaps, portfolio optimization).
- Partner with Product and Engineering to ensure the data platform (warehouse, semantic layer, BI tools) can support self-service analytics, embedded insights in member/supplier portals, and AI-driven use cases.
- Champion enterprise metrics and advanced analytics capabilities such as forecasting, benchmarking, opportunity sizing, and integrity analytics, ensuring models are traceable, governed, and auditable.
- Translate business needs into clear data products (curated data sets, subject-area marts, APIs) that serve both internal teams and external-facing solutions.
Stakeholder leadership and collaboration
- Serve as the enterprise "single point of accountability" for data and analytics, aligning priorities across Technology, Category Management, Supplier Relations, Sales, Finance, and Operations.
- Partner with Supplier and Member-facing teams to co-create analytics offerings that differentiate the GPO (e.g., supplier growth playbooks, member CFO dashboards, public-sector transparency packs).
- Educate executives and business leaders on data literacy, standard metrics, and how to use insights in planning, negotiations, and supplier programs.
- Collaborate closely with Security, Legal, and Compliance to ensure that member and supplier data is used ethically and in line with contracts and regulations.
Team building and operations
- Build and lead a high-performing team of data analysts, analytics engineers, data governance managers, and data stewards.
- Define operating rhythms (data council, data domain forums, metric review cadences) that keep governance and analytics tightly connected to business outcomes.
- Establish and track KPIs for the data function itself (data quality scores, adoption of governed datasets, BI usage, time-to-insight).
- Select and manage key tools and vendors in the analytics and governance ecosystem (warehouse, BI, catalog/governance, quality monitoring).
Qualifications
- Bachelor's or Master's degree in Data/Computer Science, Information Systems, Analytics, Statistics, Business, or related field.
- 10+ years of experience in analytics, data governance, or enterprise data management, including 3–5+ years leading teams.
- Proven experience in a procurement, supply chain, GPO, distribution, or B2B marketplace environment strongly preferred.
- Demonstrated success implementing data governance frameworks and delivering analytics that directly influenced commercial or procurement outcomes (e.g., savings, compliance, supplier growth).
- Hands-on familiarity with modern data platforms (e.g., Snowflake/BigQuery/Redshift, dbt, Power BI/Tableau/Looker, and one or more data catalog/governance tools).
- Strong grasp of regulatory / contractual considerations relevant to member and supplier data (data sharing agreements, use of benchmarking, privacy/security standards).
- Excellent leadership, storytelling, and stakeholder management skills; able to influence at C-suite and board levels.
Attributes for success
- Business-first mindset: instinctively ties data work to member value, supplier value, and financial impact.
- Pragmatic operator: balances governance rigor with speed, enabling innovation rather than blocking it.
- Skilled translator: can convert complex data and AI topics into clear narratives for executives, sales, and category leaders.
- Culture builder: passionate about creating a data-driven culture that values standard definitions, trusted data, and measurable outcomes.
Compensation:
$150,000 to $200,000 annual salary.
Exact compensation may vary based on several factors, including skills, experience, and education.
Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
Job Title: Senior Data Engineer
Location: Chicago, IL (Hybrid)
Department: Data & Analytics
Reports To: Head of Data Engineering / Data Platform Lead
Role Overview
We are seeking a highly skilled Senior Data Engineer with strong Python development expertise and deep experience in Snowflake to design, build, and optimize scalable enterprise data solutions. This role is based in Chicago, IL and will support regulatory and risk data initiatives in a highly governed environment.
The ideal candidate has hands-on experience building modern cloud data platforms and is familiar with risk management frameworks, BCBS 239 principles, and Governance, Risk & Compliance (GRC) requirements within financial services.
Key Responsibilities
Data Engineering & Architecture
Design, develop, and maintain scalable data pipelines using Python.
Build and optimize data models, transformations, and data marts within Snowflake.
Develop robust ELT/ETL frameworks for structured and semi-structured data.
Optimize Snowflake performance, cost efficiency, clustering, and workload management.
Implement automation, monitoring, and CI/CD for data pipelines.
Risk & Regulatory Data Management
Support regulatory reporting aligned with BCBS 239 (risk data aggregation and reporting).
Ensure data traceability, lineage, reconciliation, and auditability.
Implement controls aligned with Governance, Risk & Compliance (GRC) frameworks.
Partner with Risk, Finance, Compliance, and Audit teams to deliver accurate and governed data assets.
Data Governance & Quality
Develop and enforce data quality validation frameworks.
Maintain metadata, lineage documentation, and data catalog integration.
Implement data access controls and security best practices.
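Reconciliation and auditability of the kind required under BCBS 239 typically start with control totals: after each load, row counts and summed amounts in the target are compared against the source. A small stdlib sketch with hypothetical records:

```python
def reconcile(source_rows, target_rows, amount_key="amount"):
    """Compare control totals between a source extract and a loaded target.

    Returns both sides' (row count, summed amount) plus a match flag, so a
    mismatch can be logged with enough detail to investigate.
    """
    src = (len(source_rows), round(sum(r[amount_key] for r in source_rows), 2))
    tgt = (len(target_rows), round(sum(r[amount_key] for r in target_rows), 2))
    return {"source": src, "target": tgt, "match": src == tgt}

source = [{"amount": 100.00}, {"amount": 250.50}]
target = [{"amount": 100.00}, {"amount": 250.50}]
result = reconcile(source, target)
print(result["match"])  # True: counts and totals agree
```

In a Snowflake pipeline the same comparison would usually be two aggregate queries (one against the staged source, one against the target table) whose results are persisted for audit.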
Technical Leadership
Provide mentorship and code reviews for data engineering team members.
Promote engineering best practices and documentation standards.
Collaborate cross-functionally with architects, analysts, and business stakeholders.
Required Qualifications
7+ years of experience in Data Engineering or Data Platform development.
Strong Python programming expertise (Pandas, PySpark, Airflow, etc.).
Hands-on experience with Snowflake (data modeling, Snowpipe, Streams & Tasks, performance tuning).
Advanced SQL skills and deep understanding of data warehousing concepts.
Experience supporting BCBS 239 compliance or similar regulatory reporting frameworks.
Experience working within Governance, Risk & Compliance (GRC) structures.
Experience in cloud environments (AWS, Azure, or GCP).
Strong understanding of data lineage, controls, reconciliation, and audit requirements.
Preferred Qualifications
Experience in banking, capital markets, or financial services.
Knowledge of credit risk, market risk, liquidity risk, or regulatory reporting domains.
Experience with data governance tools (Collibra, Alation, etc.).
Familiarity with DevOps practices, Docker, Kubernetes.
Experience building enterprise data platforms in highly regulated environments.
Key Competencies
Strong problem-solving and analytical thinking.
Ability to operate in a regulated, audit-driven environment.
Excellent communication and stakeholder management skills.
Detail-oriented with a focus on data accuracy and integrity.
Leadership mindset with hands-on technical capability.
Must be local to TX
Skills:
- Delivery manager for the 2026 roadmap
- To deliver the roadmap: interact with the business, explain the value proposition, and understand their rules and standard rules
- Manage timelines
- Partner with segments
- Measure before-and-after Data Quality scores
Technical:
- Articulate technical designs and solutions
- Capabilities of Collibra and Soda, and how to use those tools
- Proactive communication skills
- 12+ years of experience in a Technical Project Manager type of role, with solutioning and problem-solving skills
Role Summary
The Data Governance Lead will design, build, and scale an enterprise data governance program from the ground up, using Collibra as the core platform for a large real estate enterprise. This senior role combines strategic leadership, hands‐on Collibra configuration, stakeholder management, and deep domain knowledge of real estate data. The incumbent will own the governance vision, operating model, and tooling, and will partner with business, IT, data engineering, analytics, legal, and compliance teams.
Key Responsibilities
1. Data Governance Strategy and Operating Model
- Define and implement the enterprise data governance strategy, roadmap, and operating model aligned to business objectives.
- Define governance KPIs, maturity metrics, and success measures.
- Drive adoption through change management, communications, and training.
2. Collibra Implementation from Scratch
- Lead end‐to‐end Collibra implementation: platform setup, environment planning (Dev/Test/Prod), domain modeling, and taxonomy design.
- Customize asset models for real estate use cases.
- Configure and manage Business Glossary, Data Dictionary, Data Catalog, and Reference Data & Code Sets.
- Design and implement Collibra workflows for glossary lifecycle, owner/steward assignment, issue management, and escalation.
- Implement Collibra operating model with defined roles (Data Owner, Data Steward, Custodian, Consumer) and RACI mappings.
- Integrate Collibra with data warehouses/lakes (Snowflake, BigQuery, Azure), BI tools (Power BI, Tableau), and ETL/ELT tools (Informatica, dbt, ADF).
- Lead metadata ingestion across technical, operational, and business metadata.
3. Data Ownership, Stewardship, and Accountability
- Define and institutionalize data ownership and stewardship across business units.
- Partner with business leaders to assign Data Owners and Stewards.
- Drive accountability for data definitions, data quality, and metadata completeness.
- Establish Data Governance Councils and working groups.
4. Data Quality and Issue Management
- Collaborate with data quality teams to define Critical Data Elements (CDEs) and align rules and thresholds.
- Configure Collibra issue management workflows and ensure traceability from issues to root causes and remediation actions.
- Provide governance oversight for remediation and continuous improvement.
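An issue-management workflow with traceability, like the one described above, is at its core a state machine plus an audit trail. A tool-agnostic sketch; the states and transitions here are assumptions for illustration, not Collibra's actual workflow model:

```python
# Allowed transitions for a data-quality issue; every move is recorded
# so the path from issue to root cause to remediation stays traceable.
TRANSITIONS = {
    "open": {"triaged"},
    "triaged": {"in_remediation", "rejected"},
    "in_remediation": {"resolved"},
    "resolved": set(),
    "rejected": set(),
}

class Issue:
    def __init__(self, issue_id, root_cause=None):
        self.issue_id = issue_id
        self.state = "open"
        self.root_cause = root_cause
        self.history = [("open", "created")]

    def move(self, new_state, note=""):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state
        self.history.append((new_state, note))

issue = Issue("DQ-101", root_cause="stale reference data")
issue.move("triaged", "assigned to steward")
issue.move("in_remediation", "refresh job scheduled")
issue.move("resolved", "validated against CDE rule")
print(issue.state, len(issue.history))  # resolved 4
```

The point of forbidding arbitrary jumps (e.g., open straight to resolved) is that the audit trail then guarantees every closed issue passed through triage and remediation.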
5. Compliance, Risk, and Security Governance
- Define governance controls for regulatory compliance, contractual data, and financial reporting.
- Partner with Legal, Risk, and Security to classify sensitive data and apply access and usage policies.
- Implement data classification and privacy metadata within Collibra.
6. Stakeholder and Program Leadership
- Serve as the single point of accountability for the data governance program.
- Present progress, metrics, and risks to senior leadership.
- Mentor governance analysts, stewards, and platform administrators.
- Coordinate with system integrators and vendors as required.
Required Skills and Qualifications
Mandatory
- 12–18+ years in data management, data governance, or analytics leadership.
- Deep hands‐on experience implementing Collibra from scratch at enterprise scale.
- Strong expertise in business glossary and metadata management, stewardship models, and workflow automation in Collibra.
- Proven track record driving enterprise adoption of governance platforms.
- Excellent stakeholder management and communication skills.
Preferred
- Experience in real estate, property management, construction, facilities, or capital projects.
- Familiarity with DAMA‐DMBOK, DCAM, or similar governance frameworks.
- Exposure to data quality tools such as SODA, Great Expectations, or Informatica DQ.
- Experience integrating Collibra with cloud data platforms.
- Prior experience leading governance programs in large, federated organizations.
- Collibra certification is a plus.
Behavioral and Leadership Attributes
- Strategic thinker with strong execution capability.
- Balances business pragmatism with governance rigor.
- Influences without formal authority and drives change.
- Excellent storytelling and change management skills.
- Hands‐on leader who can configure Collibra and mentor teams.
Success Measures First 12 Months
- Collibra platform live with core real estate domains onboarded.
- Business glossary adopted across key business units.
- Formal data ownership established for critical datasets.
- Measurable improvement in metadata completeness and data quality visibility.
- Governance operating model embedded into daily business processes.
Job Opportunity: Data Product Manager
Location: Cleveland, Ohio/ Pittsburgh, Pennsylvania Hybrid
Duration: Full-Time
Key Responsibilities
Product Ownership & Strategy
- Own end-to-end lifecycle of assigned data products, including vision, strategy, roadmap, and delivery.
- Collaborate with business units, analytics teams, and technology partners to prioritize features and enhancements.
- Define and track key product success metrics and adoption KPIs.
- Advocate for data products across the organization, ensuring alignment with enterprise data governance and cloud/data strategy initiatives.
Stakeholder Engagement
- Act as the primary liaison between business stakeholders and data engineering/analytics teams.
- Gather and translate business requirements into actionable data product specifications.
- Facilitate cross-functional collaboration to resolve trade-offs and dependencies.
Data Governance & Quality
- Ensure data products comply with regulatory, security, and privacy requirements.
- Define and enforce data quality standards, lineage, and observability metrics.
- Collaborate with Data Governance, Risk, and IT Security teams to maintain compliance and audit readiness.
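Observability metrics for data products, as mentioned above, often begin with a per-product freshness check against an SLA. A stdlib sketch; the product names and SLA windows are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-product freshness SLAs (how stale a dataset may become).
SLAS = {
    "deposits_daily": timedelta(hours=26),
    "loans_daily": timedelta(hours=26),
}

def freshness_breaches(last_loaded, now=None):
    """Return the products whose latest load is older than their SLA."""
    now = now or datetime.now(timezone.utc)
    return [p for p, ts in last_loaded.items() if now - ts > SLAS[p]]

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
loads = {
    "deposits_daily": datetime(2024, 1, 2, 6, 0, tzinfo=timezone.utc),  # fresh
    "loans_daily": datetime(2023, 12, 31, 6, 0, tzinfo=timezone.utc),   # stale
}
print(freshness_breaches(loads, now))  # ['loans_daily']
```

A product manager would typically surface these breaches on the product's adoption/health dashboard and feed them into the audit-readiness evidence described above.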
Technical Leadership
- Understand and leverage modern data technologies (e.g., relational databases, data warehouses, data lakes, ETL pipelines, cloud platforms, APIs, BI tools).
- Collaborate with data engineering teams on architecture, modeling, and platform decisions.
- Evaluate emerging technologies and recommend innovations to improve data products and processes.
Execution & Delivery
- Drive delivery of data products using agile methodologies.
- Prioritize backlog, manage sprints, and ensure timely delivery of features.
- Monitor and measure product performance, adoption, and business impact.
Thought Leadership
- Contribute to the overall data product management framework and best practices within the bank.
- Promote a culture of data-driven decision-making and product-centric thinking.
Required Qualifications
- 8+ years of experience in data product management, data strategy, or analytics roles; experience in banking/financial services preferred.
- Strong understanding of core banking products (e.g., deposits, loans, payments) and associated operational data flows.
- Solid knowledge of data architecture, warehousing, BI, analytics, and cloud platforms.
- Proven ability to manage multiple data products simultaneously.
- Excellent communication, stakeholder management, and leadership skills.
- Experience with Agile/Scrum methodologies and data governance frameworks.
- Bachelor's degree in Computer Science, Information Systems, Finance, or related field; advanced degree preferred.
Preferred Qualifications
- Hands-on experience with HiveQL, SQL, Tableau
- Understanding of regulatory reporting requirements (e.g., CCAR, FR Y-14, Basel).
- Exposure to semantic layers, or enterprise data product management frameworks.