Connect

Get in Touch

We'd love to hear from you. Reach out today!

Careers

Open Positions

Join our team and contribute to innovative AI solutions that prioritise sustainability.

Senior ML / DevOps Engineer – AI Scheduling & Optimisation
Engineering

Why Project Ohm?

AI’s energy appetite is exploding just as renewable power is being curtailed and wasted. Project Ohm flips that equation: we run high-performance AI workloads next to stranded clean energy that would otherwise go unused. Our orchestration platform dynamically schedules GPU jobs to follow real-time price and renewable availability—cutting costs and emissions at scale.

Backed by Australian and international investors and selected to represent Australia at the 2025 UN AI for Good Summit, we’re now hiring the brainpower to build the optimisation engine at the heart of our vision.

The Challenge

Design and deliver a rules-based, energy-aware scheduler that places thousands of containerised AI jobs across a decentralised fleet of GPU nodes—while reacting in milliseconds to wholesale market signals, network constraints, and SLA tiers.

Why this role matters

Project Ohm’s platform is built around a hub-and-spoke “islands” architecture: many autonomous GPU pods (Kubernetes clusters) positioned at renewable-rich sites, all steered by a central control plane. Your code will power the Custom Rule/Algorithm Engine that decides—every few seconds—which pod gets which job based on energy price, carbon intensity, GPU type and SLA.
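
To make that concrete, a first-pass placement heuristic might look something like the sketch below. This is an illustrative, simplified example only, not the production engine: the field names, weights and SLA tiers are assumptions chosen for readability.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pod:
    name: str
    spot_price_aud_mwh: float  # live wholesale energy price at the pod's site
    carbon_gco2_kwh: float     # grid carbon intensity at the site
    gpu_type: str              # e.g. "H100"
    free_gpus: int

@dataclass
class Job:
    name: str
    gpu_type: str
    gpus: int
    sla: str                   # "realtime" | "standard" | "deferrable"

# Illustrative weights: deferrable work chases cheap, clean power hardest,
# while realtime work largely ignores price.
SLA_PRICE_WEIGHT = {"realtime": 0.2, "standard": 0.6, "deferrable": 1.0}
CARBON_WEIGHT = 0.1

def score(pod: Pod, job: Job) -> Optional[float]:
    """Lower is better; None means the pod cannot take the job at all."""
    if pod.gpu_type != job.gpu_type or pod.free_gpus < job.gpus:
        return None
    return (SLA_PRICE_WEIGHT[job.sla] * pod.spot_price_aud_mwh
            + CARBON_WEIGHT * pod.carbon_gco2_kwh)

def place(job: Job, pods: list[Pod]) -> Optional[Pod]:
    """Pick the cheapest/cleanest eligible pod for a job, if any."""
    scored = [(s, pod) for pod in pods if (s := score(pod, job)) is not None]
    return min(scored, key=lambda sp: sp[0])[1] if scored else None
```

In production the same decision would also weigh queue depth, data locality and network constraints, and would be re-evaluated continuously as the WEM/AEMO feeds update.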

What you’ll tackle

Design & ship the scheduling core

Extend our Python/Go engine to drive Kueue + MultiKueue for gang-scheduling thousands of AI/HPC jobs across clusters.
Encode cost-/carbon-aware heuristics and reinforcement-learning loops that react to live WEM/AEMO price feeds (a simplified sketch of such a rule appears after this list).

Orchestrate at planet scale

Work with Rafay KOP to template fleet-wide blueprints, automate upgrades and enforce policy guard-rails.
Leverage Pulumi IaC (wrapping Terraform) so infrastructure, network (via Netris) and app logic live in one repo.

Keep data moving, not waiting

Optimise Dragonfly P2P and NVIDIA AIStore caches so containers and datasets hit the GPUs in seconds, not minutes.

Make the network disappear

Exploit Cilium eBPF + Cluster Mesh to link clusters, enforce zero-trust L3–L7 policy and visualise flows with Hubble/Tetragon.

Measure what matters

Instrument utilisation, queue latency, $/MWh saved and tCO₂e avoided; surface them in Grafana & customer portals.
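
As a rough sketch of the price-reactive behaviour mentioned in the scheduling-core items above, the loop below pauses admission of deferrable work at a site when the wholesale price spikes and resumes it when the price falls back. The feed URL, threshold and the set_deferrable_admission stub are placeholders; in practice that hook would adjust Kueue quotas or steer MultiKueue away from the spiking cluster.

```python
import time
import requests  # placeholder: a real WEM/AEMO integration would use the market operator's own API

PRICE_FEED_URL = "https://example.internal/wem/spot-price"  # hypothetical internal feed
SPIKE_THRESHOLD_AUD_MWH = 300.0  # illustrative spike threshold
CHECK_INTERVAL_S = 30

def current_spot_price() -> float:
    """Return the latest wholesale price in AUD/MWh from the placeholder feed."""
    resp = requests.get(PRICE_FEED_URL, timeout=5)
    resp.raise_for_status()
    return float(resp.json()["price_aud_mwh"])

def set_deferrable_admission(enabled: bool) -> None:
    """Stub: a real implementation would shrink or restore a Kueue ClusterQueue's
    quota (or deprioritise the site in MultiKueue) so deferrable jobs stop being
    admitted while the price spike lasts."""
    print(f"deferrable admission {'resumed' if enabled else 'paused'}")

def watch_price() -> None:
    paused = False
    while True:
        price = current_spot_price()
        if price >= SPIKE_THRESHOLD_AUD_MWH and not paused:
            set_deferrable_admission(False)
            paused = True
        elif price < SPIKE_THRESHOLD_AUD_MWH and paused:
            set_deferrable_admission(True)
            paused = False
        time.sleep(CHECK_INTERVAL_S)
```

A real engine would add hysteresis and price forecasting rather than a single threshold, but the shape of the rule is the same.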

About you

6+ years building distributed systems, including Kubernetes-native scheduling or OR/ML optimisation.
Deep hands-on with at least three of: Kueue/MultiKueue, Slurm, Kubeflow Trainer, Argo CD, GPU Operators, Pulsar/Kafka, Ray, Spark.

Fluent in Python and Go; comfortable turning Jupyter notebooks into production micro-services.

Strong DevOps chops: GitHub Actions, container-first CI/CD, observability stacks, chaos engineering.

You think about latency, bandwidth and egress dollars before you write code, and you can explain eBPF to a junior engineer in five minutes.

Bonus points for: energy-market modelling, e-mobility or micro-grid optimisation, eBPF wizardry, or having run infra in remote/harsh environments.

What we offer

Mission with leverage – every algorithmic improvement cuts both carbon and cost for AI workloads at scale.
Early-team equity – help define the platform and share in its upside.
Remote flexibility – fly-in sprints to Perth’s renewables-powered HQ when it adds value.
Growth budget – pick the GPU workstation, conference or course that accelerates you.
Founder access – report directly to the CEO; shape roadmap, hiring and open-source engagement.

How to apply

This role is open to applicants located anywhere in Australia.
Send your CV and GitHub profile, plus fewer than 300 words on how you'd evolve MultiKueue to handle spot-price spikes, to talent@projectohm.com.

Onsite / Remote
Full Time

Introduction

The integration of artificial intelligence with sustainable practices has never been more important. At the forefront of this movement is energy-aware AI computing, a new paradigm that improves performance while prioritising environmental stewardship. This post explores the approaches being adopted to ensure that AI systems contribute positively to our planet.

The importance of energy efficiency in AI cannot be overstated. As AI applications grow, so does their energy consumption. By harnessing advanced algorithms and optimised hardware, we can significantly reduce the carbon footprint associated with AI operations. This shift towards sustainability is not just a trend; it is a necessity for a greener future.

As we explore the intersection of AI and sustainability, it becomes evident that collaboration across industries is essential. Tech companies, researchers, and policymakers must work together to create frameworks that support energy-efficient AI development. This collective effort will pave the way for innovations that align technological advancement with ecological responsibility.

"The future of AI is not just about intelligence; it's about being intelligent with our resources. We must innovate responsibly to ensure a sustainable tomorrow."

The journey towards sustainable AI is filled with challenges, yet the potential rewards are immense. By investing in research and development, companies can unlock new solutions that not only enhance their operational efficiency but also contribute to a healthier planet. The time to act is now, and the opportunities for impactful change are within our grasp.

Conclusion

The path to sustainable AI computing requires commitment and collaboration. As we navigate this frontier, let us stay focused on creating technologies that advance our capabilities while protecting our environment. Together, we can build a future where AI and sustainability go hand in hand, ensuring a better world for generations to come.

As we move forward, it is essential to stay informed and engaged with the latest developments in this field. By doing so, we can all play a part in shaping a sustainable future, one innovation at a time.
