Full-Stack Software Engineer (Data Pipeline Focus, SWE1) [D.25.0131]

  • Full Time
  • Maryland
  • This position has been filled

Requires Top Secret/SCI with Full Scope Poly

Description: Join a high-impact, mission-driven team as a Full-Stack Software Engineer supporting a mature, well-established technical organization. You’ll be a member of a 6-person team that plays a critical role within a broader 20+ person development organization. The team develops and maintains data flows and data processing and transformation workflows, and ensures data integrity. The broader effort supports a robust web-based platform that integrates diverse publicly available information (PAI) sources into a powerful analytical tool used by hundreds of mission customers.

In this role, you’ll contribute across the stack: developing ingest pipelines, building scalable REST APIs, and enhancing user-facing visualizations. The platform supports large-scale data ingestion, complex queries, and interactive analysis. You’ll collaborate closely with the project’s other sub-teams to ensure end-to-end functionality and performance.

We’re looking for a developer who is excited to contribute to all aspects of this system and who also brings an interest in improving team processes and tooling, particularly in ways that help us integrate new data sources more efficiently and deliver faster. This long-running software effort is known for its technical stability, team cohesion, and deep integration into mission-critical systems, offering you the opportunity to make meaningful contributions in a fast-paced, collaborative environment.

Responsibilities:

  • Contribute to the development and continual improvement of a mature software system, including code, diagrams, and tests.
  • Leverage development and design patterns to ensure the product’s scalability, maintainability, and long-term success.
  • Understand API-driven microservice design patterns, NoSQL databases, dataflow tools (Apache NiFi), and Spring Boot applications.
  • Contribute to all parts of the data lifecycle, from collection through processing, transformation, and storage, and facilitate presentation to analysts in our UI.
  • Work as a collaborative team player on a fast-paced, structured team.

Skills Requirements:

  • Java experience.
  • Data wrangling or processing experience (discovery, mining, cleaning, exploration, modeling, structuring, enriching, and validating JSON data).
  • Familiarity with Git for version control and Maven for build automation.
  • Comfortable working in a Linux development environment.
  • Demonstrated willingness and ability to learn new tools, technologies, and workflows.
  • Excellent communication and teamwork skills.

Nice to Haves:

  • Dataflow experience (e.g., Apache NiFi or similar tools).
  • Experience with NoSQL databases (e.g., Elasticsearch, MongoDB, Redis, Dgraph).
  • Spring Boot REST APIs and Spring libraries (Spring Security, Spring Data, etc.).
  • Scripting with Bash, Python, and/or Groovy.
  • Experience with AWS services (EC2, S3, Lambda).
  • Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and automated testing (e.g., JUnit).
  • Experience using Atlassian tools, including Jira and Confluence, for task tracking and documentation.
  • Hands-on experience with containerization technologies such as Docker and Kubernetes.
  • Experience with production CNO capabilities and operations.

YOE Requirement: 3 yrs. with a B.S. in a technical discipline, or 4 additional yrs. of experience in place of the B.S.
