Eter — A wordplay on Heterogeneous and Ether.
Why Eter? • Getting the Source Code and Building Eter • License
Warning
Eter is currently in the early stages of development. The language design is subject to significant change, and implementation has not yet begun.
The Eter Reference serves as the primary language specification, covering Eter's syntax and semantics. The API Reference provides the Doxygen-generated documentation for the Eter compiler's C++ API, intended for contributors and advanced users who want to extend or interface with the compiler.
// ML models as first-class citizens. The @model attribute orchestrates the native
// loading and linking of pre-trained assets directly into the program's binary.
// The function signature acts as a static hardware contract: the compiler enforces
// tensor shape integrity and memory residency (e.g., @host, @gpu), ensuring
// zero-overhead data transitions and eliminating runtime shape mismatches.
@model<TF /* TensorFlow */, version = V1>("mobilenet_v2")
extern fn infer(x: tsor[f32; [1, 224, 224, 3]] @host) -> tsor[f32; [1, 1000]] @host;
fn main() {
    let input: tsor[f32; [1, 224, 224, 3]] @host = tsor::from_image("dog.jpg");
    let output: tsor[f32; [1, 1000]] @host = infer(input);
    print("Inference completed. Output shape: ", output.shape());
}

Modern software development increasingly relies on heterogeneous computing, yet writing performant code across diverse hardware remains a significant challenge. Existing solutions—ranging from libraries and compiler extensions to domain-specific and system programming languages—often face technical limitations or practical trade-offs. Currently, machine learning (ML) models are compiled via specialized tools like XLA, Glow, or TVM, making their integration into general-purpose languages such as Python, C++, or Rust difficult and often requiring wrappers that introduce overhead and complexity. Furthermore, achieving high performance across different architectures such as GPUs and specialized accelerators often demands a deep understanding of hardware-specific models, which compromises both efficiency and portability.
Eter is a new programming language designed to bridge these gaps. It provides a high-level, expressive syntax that compiles efficiently to a wide range of targets, including CPUs, GPUs, and specialized accelerators. Eter empowers developers to write native GPU kernels and manage distributed resources—such as device meshes and sharded tensors—directly within the language. In Eter, machine learning models are first-class citizens, making inference on a pre-trained model as seamless as a standard function call. Built on the LLVM and MLIR infrastructure, Eter leverages industry-leading optimization and code generation capabilities to deliver native performance with high-level elegance.
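As a purely hypothetical sketch of what a native GPU kernel might look like, the snippet below reuses the `tsor` type and residency annotations (`@host`, `@gpu`) from the inference example above; everything else is invented for illustration, since the implementation has not yet begun.

```
// Hypothetical sketch: a kernel whose inputs and output are declared
// resident on the device via @gpu, mirroring @host in the example above.
// The compiler would check shapes and residency at compile time, so no
// manual launch configuration or host<->device copies appear in user code.
fn saxpy(a: f32, x: tsor[f32; [1024]] @gpu, y: tsor[f32; [1024]] @gpu)
    -> tsor[f32; [1024]] @gpu {
    a * x + y
}
```

In this sketch, moving a tensor between `@host` and `@gpu` would be an explicit, type-checked operation rather than an implicit runtime copy, which is how the language could deliver the zero-overhead data transitions described above.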
Consult the Getting Started with Eter page for information on building and running Eter. Eter currently expects LLVM/MLIR 22.x and a C++17-capable compiler toolchain.
For information on how to contribute to the Eter project, please take a look at the Contributing to Eter guide.
Contributors may also want to enable the repository Git hooks described in Contributing to Eter for local formatting, linting, and pre-push validation.
For IDE/LSP setup tips, including the recommended root .clangd file, see Getting Started with Eter.
Eter is licensed under the Apache License v2.0 with LLVM Exceptions. See the LICENSE file for more information.