Software Engineer, Data (Core Engineering)

Dime Line Trading

Chicago · Full-time · Mid Level · On-site

Job Description

We are looking for a strong Software Engineer who specializes in data systems. In this role, you won't just be writing scripts; you will be building the core backend services, distributed systems, and robust infrastructure that power our data platform. If you approach data challenges with a software engineering mindset and have a deep understanding of how data flows through complex systems, this role is for you.

What you'll do:

- Build Core Systems: Design, develop, and deploy highly scalable backend services, APIs, and distributed systems that support our data infrastructure.
- Manage Access Patterns: Architect scalable systems capable of handling diverse data access patterns (e.g., high-throughput writes, low-latency reads, heavy analytical scans) and optimize our existing data access layers.
- Construct Data Pipelines: Build and maintain fault-tolerant pipelines, leveraging industry best practices for data storage, retrieval, and processing.
- Event-Driven Architecture: Design and implement robust event-driven platforms that ensure reliable data delivery and real-time processing capabilities.
- Champion Engineering Standards: Apply rigorous software engineering practices to data, including CI/CD, comprehensive testing (unit, integration, and end-to-end), and version control.

Skills you need:

- Core Engineering Background: 4+ years of overall experience in backend software engineering, with a strong grasp of computer science fundamentals, data structures, and algorithms.
- Data Expertise: 2+ years of dedicated experience in a data engineering capacity.
- Language Proficiency: Expert-level coding skills in Python, Java, Go, or Scala, along with advanced SQL capabilities.
- System Design: Proven experience building scalable systems and optimizing data access patterns for various downstream use cases.
- Data Ecosystem Knowledge: Strong familiarity with industry best-practice solutions for:
  - Cloud-based architecture (AWS, GCP, or Azure)
  - Data storage and retrieval (e.g., relational, NoSQL, columnar databases)
  - Event-driven platforms (e.g., Kafka, Kinesis, RabbitMQ)
  - Batch and stream data processing (e.g., Spark, Flink)

Posted Today
