Level 2 – Technical Support Engineer

Full-Time

USA, Remote

1 opening

About the Role

LakeFusion is seeking a Technical Support Engineer (Level 2) to handle complex, escalated issues across our Master Data Management platform, built natively on the Databricks Data Intelligence Platform. In this role, you will take ownership of advanced troubleshooting and root cause analysis across the full platform stack, ensuring reliability and performance for enterprise customers.

You will work hands-on to diagnose and resolve issues related to data pipelines, entity matching, survivorship, and platform integrations. This includes analyzing logs, querying data directly, reproducing issues in isolated environments, and interpreting pipeline behavior to identify and resolve failures.

Working closely with engineering, QA, and product teams, you will escalate confirmed bugs with clear context, validate fixes, and contribute to improving system stability. You will also proactively monitor system health, support complex customer configurations, and enhance internal documentation and troubleshooting processes.

This is a highly analytical, self-directed role suited to someone who thrives in a fast-paced environment where solving complex technical challenges and delivering a seamless customer experience are central to success.

What you'll do

  • Own escalated tickets from Level 1, performing in-depth root cause analysis across the LakeFusion platform stack — from ingestion and blocking through entity matching, survivorship, and golden record output
  • Diagnose and resolve complex issues involving Databricks Workflows and notebooks, Delta table schema mismatches, Vector Search configuration and query failures, Spark job failures, and REST API integration errors
  • Reproduce customer issues in isolated environments; gather and analyze Spark logs, Databricks cluster event logs, and Delta transaction logs to isolate failure points
  • Query and analyze Databricks SQL or Unity Catalog tables directly to investigate data quality issues, deduplication anomalies, survivorship rule misfires, and match score discrepancies
  • Read and interpret LakeFusion Python pipeline code to understand execution context when triaging bugs or unexpected behavior
  • Collaborate with engineering and product teams to escalate confirmed bugs with clear reproduction steps, log evidence, and environment details; validate fixes in staging before customer delivery
  • Monitor system health, job run history, and data flow metrics to proactively surface and address issues before customers report them
  • Assist enterprise customers with complex configuration scenarios including multi-source matching, custom survivorship rules, crosswalk management, and Lakebase integration
  • Maintain and improve internal and external knowledge base documentation — known issues, troubleshooting runbooks, and best practices

What we're looking for

  • 3+ years of experience in technical support, application support, or a data-focused engineering role in a SaaS or cloud data environment
  • Working proficiency in SQL — comfortable writing investigative queries against Delta tables, reading query plans, and interpreting results in a Databricks SQL or Unity Catalog context
  • Hands-on experience with Databricks — running notebooks, reading Spark UI output, interpreting cluster logs, and understanding job/workflow configuration
  • Solid understanding of data management concepts: entity resolution, deduplication, master data, data quality, and pipeline architecture
  • Ability to read Python code confidently — not necessarily write production code from scratch, but enough to follow pipeline logic, understand function signatures, and interpret error tracebacks
  • Strong analytical and problem-solving skills with high attention to detail
  • Experience with ticketing and incident management tools (e.g., Zendesk, Jira, ServiceNow)
  • Ability to communicate complex technical issues clearly to both technical and non-technical audiences
  • Comfort working cross-functionally with engineering, QA, and product teams

Nice-to-have

  • Experience with Databricks Vector Search, Databricks Model Serving, or Unity Catalog
  • Familiarity with Delta Lake internals — transaction logs, CDF (Change Data Feed), schema evolution, MERGE behavior
  • Exposure to MDM platforms, entity matching concepts, or data stewardship workflows
  • Experience with Snowflake or Microsoft Fabric as alternative lakehouse platforms
  • Python scripting ability beyond reading — e.g., writing diagnostic scripts, notebook cells, or small utilities
  • Familiarity with Azure infrastructure (AKS, Microsoft Entra ID, Azure networking) given LakeFusion's deployment model
  • Experience supporting enterprise-level customers or high-availability data systems
  • Background in healthcare data, financial services data, or other regulated data domains
  • IT certifications (e.g., ITIL Foundation, Databricks Certified Associate)

About LakeFusion

LakeFusion is the modern Master Data Management (MDM) company. Global enterprises across industries ranging from retail to manufacturing and financial services rely on the LakeFusion platform to unify, govern, and deliver trusted data entities such as customers, products, suppliers, and employees. Built natively on the Databricks Lakehouse, LakeFusion creates a single source of truth that powers analytics and AI, enabling organizations worldwide to accelerate innovation with trusted and governed data.

Join us

Help build the future of master data

Join a Databricks-native team building the trusted data foundation powering AI-ready enterprises.
