Platform Engineer
Centricity
Job Description
Job Title: Platform Engineer
Location: Austin, TX / In-office / Hybrid
Position Type: Exempt / Salaried
Salary: $90,000 to $110,000

Role Overview

Catalyst is hiring a Platform Engineer to design and operate the internal systems that support data infrastructure, integrations, and automation across our operational stack. The role sits at the intersection of data engineering, platform architecture, and automation systems. You will build integrations across multiple SaaS platforms; design scalable data pipelines, models, and ingestion systems; and develop automation frameworks that reduce manual operational work.
Success in this role means our internal systems operate as stable, reliable infrastructure. Data moves consistently between tools, workflows become automated, and teams can rely on the underlying systems to support day‑to‑day operations. This is a U.S.-based role, with a strong preference for engineers located in Austin, Texas, who can collaborate closely with the team.
Remote candidates within the U.S. will still be considered. You will report to the Product Owner for platform systems and collaborate closely with the analytics, strategy, and development teams.

What You’ll Do

- Design and maintain integrations across operational tools, including Accelo, BigQuery, Looker Studio, Notion/Podio, digital ads platforms, CMSs, and automation platforms.
- Build and maintain data pipelines that normalize operational, marketing, and reporting data into a centralized data warehouse.
- Architect automation workflows that eliminate repetitive production, reporting, and campaign management tasks.
- Develop internal platform utilities, APIs, and scripts that support reporting, analytics, and operational systems.
- Ensure platform reliability, including monitoring, logging, and data integrity across the stack.
- Collaborate with analytics, strategy, and product teams to translate operational needs into scalable platform capabilities.
- Continuously identify opportunities to replace manual workflows with reusable infrastructure and automation.
Example Problems You Might Work On

Examples of the kinds of problems this role may tackle include:

- Designing a reliable data pipeline that synchronizes data across multiple SaaS systems while maintaining data integrity and observability.
- Replacing a manual operational workflow with an automated system that coordinates APIs, data transformations, and reporting outputs.
- Building an internal service that exposes normalized operational data through APIs for analytics and reporting tools.
- Improving the reliability and monitoring of existing integrations, including alerting and failure recovery.
- Creating reusable infrastructure for workflow automation and cross-system data synchronization.

You may not be handed tickets for isolated tasks.
You will be expected to design systems that solve operational problems in durable ways.

Required Skills & Experience

- 5+ years building production-grade data pipelines, integrations, or internal platforms.
- Strong experience with cloud data warehouses (BigQuery strongly preferred).
- Experience integrating SaaS platforms via APIs and automation frameworks.
- Experience building or supporting BI and reporting pipelines (Looker, Looker Studio, Tableau, or similar).
- Experience designing scalable data models and data pipelines.
- A strong systems-thinking mindset and the ability to architect durable infrastructure.
- Experience with automation tooling (Make, Zapier, n8n, or similar).

Ideal Experience

Candidates do not need all of these, but strong applicants typically have several:
- Experience building internal developer platforms or operational systems
- Experience integrating CRM, marketing, or operational SaaS systems
- Experience designing analytics-ready data models
- Experience implementing observability, logging, and monitoring systems
- Experience working with modern automation platforms such as Make, n8n, or similar orchestration tools

AI-Native Tooling (Nice to Have)

Catalyst is actively experimenting with AI-assisted infrastructure and automation systems. Experience in any of the following is a plus:

- Building or integrating AI agents into operational workflows
- Working with MCP (Model Context Protocol) servers
- Developing AI-assisted automation pipelines
- Working with agent orchestration frameworks
- Using AI tools to accelerate development, data analysis, or workflow automation
- Working with LLM-enabled tooling inside internal platforms

What We Value in Engineers

We value engineers who:

- Think in systems, not scripts.
- Can translate messy operational problems into clean technical solutions.
- Take ownership of the reliability and maintainability of the systems they build.
- Communicate clearly with both technical and non-technical teammates.
You don’t need experience with our exact tools, but you should be comfortable designing systems that integrate multiple services and data sources.

How We Work

Catalyst operates at the intersection of craft, performance, and operational reliability. In practice, that means:

- We build systems that remove friction from complex workflows.
- We replace repetitive operational work with infrastructure.
- We prioritize clear thinking, strong architecture, and durable systems.

This role is ideal for engineers who enjoy building systems that quietly make complex organizations run better.