Data Engineer – Toronto or Remote

Killi is seeking a Data Engineer who stands apart from the rest of the market, is interested in joining a true startup culture, and wants to be part of one of the best-performing teams in the world.

Killi is a direct-to-consumer (D2C) platform that allows people to take back control of their personal data and get rewarded for sharing it with buyers. Killi directly tackles the issues of personal data breaches and massive corporations getting rich by selling their customers’ data in exchange for, well, nothing. Killi also provides a transparent market solution in the face of growing calls for government regulation around the collection and use of personal data.

We are a true start-up. It is hard, but you will learn a lot. We work fast, have big aspirations, and have a lot of fun along the way.

Who you are:

  • You have a passion for distributed computing, large data platforms, and event-based data
  • You are not fazed by complex, unstructured problems dealing with disparate data sets
  • You have excellent communication skills and can present your results to clients
  • You enjoy staying ahead of the curve with knowledge and interest in emerging technologies
  • You know how to find data, move it around, transform it, fill in the gaps, and implement your ideas in programming/statistical languages
  • You are the person who volunteers for new challenges, even in the face of uncertainty and without always being the expert

What you will be responsible for:

  • Designing and developing Killi’s Data Platform, the first real-time, consented, multi-geography-compliant data ingestion, analysis, and insights platform in the world
  • Helping build the vision and strategy for the future state of analytics, dashboarding, and reporting
  • Building and leading your projects end-to-end with clear and consistent client communication
  • Building and maintaining code in the AWS cloud with CI/CD practices
  • Scaling existing architectures while designing, adapting, and evolving for new data sets and challenges
  • Processing, cleansing, and verifying the integrity of the data used for analysis and sale

What we need from you:

  • An undergraduate degree in Computer Science or equivalent from an accredited university
  • 3+ years in Software Engineering in challenging environments
  • Experience with Spark, Hadoop, or other large-scale data processing technologies
  • Experience with Redshift/EDW and other large cloud-based data stores and data lakes
  • Expert level experience with Python and SQL
  • Experience with creating big data pipelines on AWS/GCP/Azure (we use AWS and Databricks)
  • Experience with AWS technologies such as S3, Redshift, Lambda, RDS, and Elasticsearch
  • Some experience in dealing with clients in B2B technical services relationships

What we would like to see as a bonus:

  • Experience with Databricks and/or an understanding of how Spark works
  • Experience with Kafka, Kafka Connect, CDC, and AWS MSK
  • Experience in building concurrent and reactive systems
  • Experience with systems and data at a massive scale
  • Experience with deployment tooling such as Docker, Jenkins, and Kubernetes (EKS)
  • Experience leading and mentoring junior team members
  • Experience with machine learning frameworks beyond linear regression
  • Experience in programmatic or digital advertising
  • Experience in real-time data processing

What you will get from us:

  • Competitive compensation with stock options in a publicly traded company
  • Comprehensive health, dental, and vision plan
  • A great company culture based on transparency, integrity, and the drive to win
  • The chance to be part of a diverse team
  • A fulfilling, challenging and flexible work experience
  • The opportunity for career growth and senior mentorship
  • Wellness and Professional Development funds

Interested candidates should apply through this link.