Parrot Analytics is the global authority on media and entertainment intelligence, providing the strategic decision support that the world's leading studios, producers, streamers, investors, and government bodies rely on to de-risk content investment and maximize returns.
Trusted across the full media economy, from studios and streaming platforms to film funds, sports leagues, and government bodies, Parrot Analytics informs capital allocation, acquisitions, programming strategy, and IP valuation at the highest levels of the industry.
By measuring the demand and preferences of more than 2 billion audiences worldwide, Parrot Analytics' AI platform quantifies the value of content, talent, franchises, and sports rights, enabling partners to forecast revenue, assess risk, optimize portfolio strategy, and drive more predictable success.
About The Role
We are looking for a Senior Software Engineer to join our Data Platform R&D team, with strong experience in core Java, AWS cloud technologies, API integrations (REST/GraphQL), and building and supporting AI-driven workflows. Our platform captures the foundational signals that power Demand and the wider suite of analytics products built on top of it. This includes making millions of API calls, connecting to social and digital data streams, and processing billions of events every day.
As part of this team, you will design and build high-scale distributed systems that collect, process, and transform raw digital signals into reliable datasets used across analytics, forecasting, and AI models.
You will also play a key role in evolving our platform to support AI-driven workflows, including intelligent data enrichment, automation, orchestration, agentic AI development, and delivering model-ready datasets for machine learning and advanced analytics.
This role is ideal for engineers who enjoy working on large-scale data systems, distributed infrastructure, and solving complex problems where reliability, scale, performance, and AI-readiness matter.
What You'll Champion
- Build Large-Scale Data Infrastructure: Design, develop, deploy, and maintain large-scale data collection applications and pipelines that capture and process massive volumes of digital audience signals from global platforms
- Enable AI-Driven and Agentic Workflows: Build, integrate, and support AI-driven workflows that enable intelligent automation, data enrichment, orchestration, and scalable delivery of model-ready data, including agentic AI patterns aligned to an AI-driven development life cycle
- Own Critical Platform Services: Build and maintain critical platform services and infrastructure on AWS to enable the next generation of analytics products, product modules, and AI-powered capabilities
- Process Data at Massive Scale: Develop and support data aggregation pipelines where collected data is processed across Apache Spark clusters, transforming high-volume raw data into reliable, analytics-ready datasets (see the sketch after this list)
- Improve Platform Reliability and Performance: Help ensure platform services meet high standards for accuracy, availability, scalability, observability, resilience, and security while operating at large scale
- Shape Platform Architecture: Contribute to the evolution of the platform architecture, helping define patterns and systems that support future data products, AI capabilities, and high-throughput distributed processing
- Collaborate Across Disciplines: Work closely with engineers, data scientists, industry insights analysts, and product teams to turn complex data challenges into scalable technical solutions
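To give a concrete flavor of the aggregation work described above, here is a minimal sketch of the kind of Spark job this team builds, written in Java against the Spark SQL API. The bucket paths, column names, and schema are illustrative assumptions for this posting, not our actual pipeline:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class SignalAggregation {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("signal-aggregation")
                .getOrCreate();

        // Raw audience signals landed by the collection pipelines.
        // Path and schema are hypothetical placeholders.
        Dataset<Row> signals = spark.read().parquet("s3://example-bucket/raw-signals/");

        // Roll raw events up into a daily, per-title aggregate.
        Dataset<Row> daily = signals
                .withColumn("day", to_date(col("event_timestamp")))
                .groupBy(col("title_id"), col("day"))
                .agg(count(lit(1)).alias("signal_count"),
                     countDistinct(col("user_id")).alias("unique_audience"));

        // Partitioning by day keeps downstream, date-bounded reads cheap.
        daily.write().mode("overwrite").partitionBy("day")
                .parquet("s3://example-bucket/analytics-ready/daily-signals/");

        spark.stop();
    }
}
```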
Requirements
- 5+ years of experience building backend systems with core Java/Python, databases, ETL, SQL, and REST/GraphQL APIs
- Hands-on experience developing and supporting agentic AI workflows in production or near-production environments
- Experience with spec-driven development workflows, AI-assisted engineering patterns, and tooling that accelerates design, implementation, testing, and iteration
- Practical familiarity with AI coding and orchestration tools such as Amazon Kiro and Kiro CLI for structured specification, automation, and developer productivity
- Strong experience building high-throughput distributed applications handling both high request volumes and large-scale data flows (see the fan-out sketch after this list)
- Deep understanding of concurrency, multi-threading, distributed systems, and system design
- Experience working with JPA, ORM frameworks, and databases such as Postgres, MySQL, Redshift, and Elasticsearch
- Strong experience with AWS infrastructure and cloud services
- Strong collaboration and communication skills, with the ability to understand business needs and translate them into clear technical solutions
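As a taste of the high-throughput work called out above, here is a minimal Java sketch of bounded fan-out over HTTP using the JDK's built-in HttpClient. The endpoints and pool size are illustrative assumptions, and production code would add retries, rate limiting, and error handling:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FanOutFetcher {
    public static void main(String[] args) {
        // A fixed pool bounds concurrency so a burst of work
        // cannot exhaust sockets or threads.
        ExecutorService pool = Executors.newFixedThreadPool(32);
        HttpClient client = HttpClient.newBuilder().executor(pool).build();

        List<String> urls = List.of(
                "https://api.example.com/signals/1",  // placeholder endpoints
                "https://api.example.com/signals/2");

        // Fan the requests out asynchronously, then join on all of them.
        List<CompletableFuture<String>> futures = urls.stream()
                .map(u -> client.sendAsync(
                                HttpRequest.newBuilder(URI.create(u)).GET().build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .toList();

        futures.forEach(f -> System.out.println(f.join().length()));
        pool.shutdown();
    }
}
```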
Benefits
- Build the Backbone of Global Entertainment Intelligence: Design and scale the data infrastructure that powers the industry's most trusted media analytics platform, supporting decisions made by studios, streamers, investors, and governments worldwide.
- Engineer at Massive Scale: Work with systems that process billions of audience signals daily, building distributed pipelines and services that transform raw global data into reliable insights powering forecasting, analytics, and AI models.
- Help Shape the AI Data Layer: Play a hands-on role in building AI-ready data infrastructure, enabling agentic workflows, intelligent data enrichment, and the next generation of machine learning and analytics capabilities.
- Solve Real Engineering Challenges: Tackle complex problems across distributed systems, high-throughput APIs, and large-scale data processing where performance, reliability, and scalability truly matter.
- Work Where Technology Meets Culture: Your engineering work ultimately influences how the entertainment industry values content, talent, and franchises, impacting what audiences around the world watch, stream, and engage with.
- Collaborate Across Disciplines: Partner with other product teams and industry experts to translate complex audience signals into powerful technology that drives strategic decisions across the global media ecosystem.