Gaia is an open source automation platform that makes it easy and fun to build powerful pipelines in any programming language. Based on HashiCorp's go-plugin and gRPC, Gaia is efficient, fast, lightweight, and developer friendly. Gaia is currently alpha! Do not use it for mission-critical jobs yet!
Develop pipelines with the help of SDKs (currently only Go) and simply check your code into a git repository. Gaia automatically clones your repository, compiles your code to a binary, and executes it on demand. All results are streamed back and formatted into a user-friendly graphical output.
Automation Engineer, DevOps, SRE, Cloud Engineer, Platform Engineer - they all have one thing in common: the majority of tech people are not motivated to take up this work, and they are hard to recruit.
One of the main reasons for this is the abstraction and poor execution of many automation tools. These tools come with their own configuration specification (often YAML syntax) or limit the user to one specific programming language. Testing is nearly impossible because most automation tools lack the ability to mock services and subsystems. Even tiny things, for example parsing a JSON file, are sometimes really painful because external, outdated libraries were used instead of the standard framework.
We believe it's time to remove all these abstractions and come back to our roots. Are you tired of writing endless lines of YAML code? Are you sick of spending days forced to write in a language that does not suit you and is not fun at all? Do you enjoy programming in a language you like? Then Gaia is for you.
Gaia is based on HashiCorp's go-plugin. It's a plugin system that uses gRPC to communicate over HTTP2. HashiCorp developed this tool initially for Packer but it's now heavily used by Terraform, Nomad, and Vault too.
Pipelines can be written in any programming language (gRPC support is a prerequisite) and can be compiled locally or simply via Gaia's build system. Gaia clones the git repository and automatically builds the included pipeline.
After a pipeline has been started, all log output from the included jobs is streamed back to Gaia and displayed in a detailed overview, along with the final result status of each job.
Gaia uses BoltDB for storage. This makes the installation step super easy: no external database is currently required.
The installation of Gaia is simple and takes only a few minutes.
The following command starts Gaia as a daemon process and mounts all data to the current folder. Afterwards, Gaia will be available on the host system on port 8080. Use the default username admin and password admin for the initial login. It is recommended to change the password afterwards.
docker run -d -p 8080:8080 -v $PWD:/data gaiapipeline/gaia:latest
It is possible to install Gaia directly on the host system. This can be achieved by downloading the binary from the releases page.
Gaia will automatically detect the folder of the binary and place all data next to it. You can change the data directory with the startup parameter --homepath if you want.
Writing a pipeline is as easy as importing a library, defining a function that will be the job to execute, and serving the gRPC server via one command.
Here is an example:
package main

import (
	"log"

	sdk "github.com/gaia-pipeline/gosdk"
)

// This is one job. Add more if you want.
func DoSomethingAwesome() error {
	log.Println("This output will be streamed back to gaia and will be displayed in the pipeline logs.")

	// An error occurred? Return it so gaia knows that this job failed.
	return nil
}

func main() {
	jobs := sdk.Jobs{
		sdk.Job{
			Handler:     DoSomethingAwesome,
			Title:       "DoSomethingAwesome",
			Description: "This job does something awesome.",
			// Increase the priority if this job should be executed later than other jobs.
			Priority: 0,
		},
	}

	// Serve
	if err := sdk.Serve(jobs); err != nil {
		panic(err)
	}
}
As you can see, pipelines are defined by jobs. Usually, one function represents one job. You can define as many jobs in your pipeline as you want.
At the end, we define a jobs array that registers all jobs with Gaia. We also add some information such as a title, a description, and the priority.
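The error return is how a job reports its outcome: a nil return marks the job as successful, while a non-nil error marks it as failed. Here is a minimal, stdlib-only sketch of such a handler (the CheckConfigExists job and the config path are hypothetical illustrations, not part of the SDK):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// CheckConfigExists is a hypothetical job handler: it returns a non-nil
// error when the given config file is missing, which would mark the job
// as failed in the pipeline overview.
func CheckConfigExists(path string) error {
	if _, err := os.Stat(path); err != nil {
		return errors.New("config file missing: " + path)
	}
	return nil
}

func main() {
	if err := CheckConfigExists("/nonexistent/gaia.conf"); err != nil {
		fmt.Println("job failed:", err)
	}
}
```

Any function with the signature func() error can serve as a job handler, so existing Go code usually needs only a thin wrapper.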
Priority is really important and should always be set. It defines the order of execution, similar to the Unix nice level. If, for example, job A has a higher priority (decimal number) than job B, job A will be executed after job B. If two or more jobs have the same priority, they will be executed simultaneously.
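Gaia's scheduler itself is internal, but the ordering rule above can be illustrated in plain Go. This sketch groups jobs into "waves" by ascending priority; jobs in the same wave share a priority and may run simultaneously (the job struct and the executionWaves helper are illustrative only, not part of the SDK):

```go
package main

import (
	"fmt"
	"sort"
)

// job mirrors the SDK's Title and Priority fields for illustration.
type job struct {
	Title    string
	Priority int64
}

// executionWaves sorts jobs by ascending priority and groups jobs with
// equal priority into one wave; waves run one after another, while the
// jobs inside a wave run simultaneously.
func executionWaves(jobs []job) [][]string {
	sorted := append([]job(nil), jobs...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return sorted[i].Priority < sorted[j].Priority
	})
	var waves [][]string
	var last int64
	for i, j := range sorted {
		if i == 0 || j.Priority != last {
			waves = append(waves, nil)
		}
		last = j.Priority
		waves[len(waves)-1] = append(waves[len(waves)-1], j.Title)
	}
	return waves
}

func main() {
	jobs := []job{
		{Title: "Deploy", Priority: 10},
		{Title: "Build", Priority: 0},
		{Title: "TestA", Priority: 5},
		{Title: "TestB", Priority: 5},
	}
	fmt.Println(executionWaves(jobs))
	// Prints: [[Build] [TestA TestB] [Deploy]]
}
```

So Build (priority 0) runs first, TestA and TestB (both priority 5) run together, and Deploy (priority 10) runs last.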
That's it! Put this code into a git repository and create a new pipeline via the Gaia UI. Gaia will compile it and add it to its store for later execution.
You can find a slightly more sophisticated example in our go-example repo.
Please find the docs at https://docs.gaia-pipeline.io. We also have an interesting tutorials section over there, for example Kubernetes deployment with Vault integration.
Gaia is currently in alpha. We strongly recommend not using Gaia for mission-critical jobs or in production. Things will change in the future and essential features may break.
One of the main issues currently is the lack of unit and integration tests. This is high on our to-do list.
Support for other programming languages is planned for the next few months. It is up to the community to decide which languages will be supported next.
Gaia can only evolve and become a great product with the help of contributors. If you would like to contribute, please have a look at our issues section. We do our best to mark issues suitable for new contributors with the label good first issue.
If you think you found a good first issue, please consider this list as a short guide:
- If the issue is clear and you have no questions, please leave a short comment saying that you are starting work on it. The issue will then usually be reserved for you for two weeks.
- If something is not clear or you are unsure what to do, please leave a comment so we can add a further description.
- Make sure that your development environment is set up and configured. You need Go installed on your machine, as well as Node.js for the frontend. Clone this repository and run make inside the cloned folder; this starts the backend. To start the frontend, open a new terminal window, change into the frontend folder, and run npm install followed by npm run dev. This should automatically open a new browser window.
- Before you start your work, fork this repository and push your changes to your fork. Afterwards, open a pull request against upstream.
If you have any questions, feel free to contact us on Gitter.