Kublr Release 1.28.1 (2024-06-15)

Kublr Quick Start

To quickly get started with Kublr, run the following command in your terminal:

sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.28.1

The Kublr Demo/Installer Docker container can also be run on ARM-based machines, such as an Apple M1 MacBook.
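
Once the container starts, the Kublr UI listens on the mapped port 9080. A quick way to confirm the installer is up and reachable (a minimal check, assuming the default port mapping from the command above):

sudo docker ps --filter name=kublr             # the container should be listed as Up
sudo docker logs -f kublr                      # follow the installer startup logs
curl -sI http://localhost:9080 | head -n 1     # the UI should respond once startup completes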

Follow the full instructions in Quick start for Kublr Demo/Installer.

Overview

The Kublr 1.28.1 release introduces several new features and improvements, including:

  • Support for STS role in AWS secrets
  • Bumped go version to v1.22.3
  • Added k8s v1.27.14 and v1.28.10
  • AWS AmazonLinux AMI filter updated to use 2024 images

All Kublr components are checked for vulnerabilities using the Aqua Security Trivy scanner. In addition to these major features, the release also includes various other improvements and fixes.
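
To run a similar scan locally against the Kublr Demo/Installer image, the standard Trivy CLI can be used; for example (an illustrative check, the severity filter is only a suggestion):

trivy image --severity HIGH,CRITICAL kublr/kublr:1.28.1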

Supported Kubernetes Versions

Version   Kublr Agent   Notes
1.28      1.28.10-2     Default version: v1.28.10
1.27      1.27.14-2
1.26      1.26.15-2
1.25      1.25.16-2     Deprecated in 1.29.0
1.24      1.24.13-8     End of support in 1.29.0

Important Changes

  • New versions of Kubernetes:

    • Kubernetes v1.28.10 (agent 1.28.10-2), the default version
    • Kubernetes v1.27.14 (agent 1.27.14-2)

  • Deprecations:

    • Kubernetes v1.23 (v1.23.17/agent 1.23.17-6) has reached End of Support.
    • Kubernetes v1.24 (v1.24.13 by default) is deprecated and will reach End of Support in Kublr v1.29.0
  • STS role support for AWS credentials added; please refer to our solution portal for detailed usage examples: Integrate Kublr Cluster with AWS IAM (see also the short example after this list)

  • Fixed the Kublr Control Plane Ingress rule with an empty hostname

  • Go version bumped to v1.22.3
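
For the STS role support mentioned above, a quick way to verify that an IAM role can be assumed before configuring it in Kublr AWS credentials is the standard AWS CLI (a minimal sketch; the role ARN and session name below are hypothetical placeholders):

aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/KublrClusterRole \
  --role-session-name kublr-sts-test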

Components versions

Kublr Control Plane

Component             Version
Kublr Operator        1.28.1
Kublr Control Plane   1.28.1

Kublr Platform Features

Component                                       Version
Kubernetes Dashboard                            v2.7.0
Kublr System                                    1.28.1
LocalPath Provisioner (helm chart version)      0.0.24
Ingress                                         1.28.1
nginx ingress controller (helm chart version)   4.8.0
cert-manager (helm chart version)               1.13.2
Centralized Logging                             1.28.1
ElasticSearch                                   7.10.2
SearchGuard                                     53.6.0
Kibana                                          7.10.2
SearchGuard Kibana plugin                       53.0.0
SearchGuard Admin                               7.10.2-53.6.0
OpenSearch (helm chart version)                 2.13.3
OpenSearch Dashboards (helm chart version)
RabbitMQ                                        3.9.5
Curator                                         5.8.1
Logstash                                        7.10.2
Fluentd                                         1.16.3
Fluentbit                                       2.1.8
Centralized Monitoring                          1.28.1
Prometheus                                      2.45.0 LTS
Kube State Metrics (helm chart version)         5.16.4
AlertManager                                    0.27.0
Grafana (helm chart version)                    7.3.5
Victoria Metrics Cluster                        0.11.13
Victoria Metrics Agent                          0.10.3
Victoria Metrics Alert                          0.9.3

AirGap Artifacts List

To use Kublr in an air-gapped environment, you will need to download the following Bash scripts from the repository at https://repo.kublr.com:

You will also need to download the following Helm package archive and Docker image lists:
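
The artifact lists are plain files served from the repository, so they can be fetched with any HTTP client (a hypothetical example; substitute the actual artifact path from the lists referenced above):

curl -fsSLO https://repo.kublr.com/<path-to-artifact>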

Supported Kubernetes Versions

  • v1.28
  • v1.27
  • v1.26
  • v1.25 (Deprecated in 1.29.0)
  • v1.24 (Deprecated in 1.28.0, End of support in 1.29.0)

Known Issues and Limitations

  • The Kublr Helm manager does not support working through a proxy server

  • The GCP compute persistent disk (PD) CSI driver can't run on ARM instances with k8s versions prior to 1.28.10: the compute-persistent-disk-csi-driver:v1.9.2 image has no ARM manifest and therefore cannot run on ARM-based VMs. Please use a custom-built image in this case, overriding it in the cluster specification:

    spec:
      kublrAgentConfig:
        kublr:
          docker_image:
            # replace the image reference below with your custom-built ARM-compatible (arm64) image
            gce_csi_pd_driver: registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.9.2
    
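    To check whether a given driver image actually ships an arm64 variant before referencing it in the cluster specification, you can inspect its manifest list (a generic check using the standard Docker CLI, not specific to Kublr):

    docker manifest inspect registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.9.2 | grep -i arm64
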
  • The vSphere CSI driver may fail to provision volumes in a topology-aware vCenter infrastructure.

    If you use the CSI driver with topology enabled, creation of a new PVC/PV can fail with an error like:

    Warning ProvisioningFailed   22s (x6 over 53s) csi.vsphere.vmware.com_vsphere-csi-controller failed to provision volume with StorageClass "kublr-system": 
    rpc error: code = Internal desc = failed to get shared datastores for topology requirement: requisite:<segments:<key:"topology.csi.vmware.com/zone" value:"zone-key" >>
    preferred:<segments:<key:"topology.csi.vmware.com/zone" value:"zone-key" > > . Error: <nil>  
    Normal  ExternalProvisioning 14s (x5 over 53s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
    

    In this case, you have to delete the csinode resources in the Kubernetes API and restart all csi-node pods:

    # kubectl delete csinode --all
    # kubectl delete po -n kube-system -l app=vsphere-csi-node,role=vsphere-csi
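
    After the pods are recreated, the csinode objects should be re-registered automatically; to verify, list them and check that the node pods are running again (using the same label selector as above):

    # kubectl get csinode
    # kubectl get po -n kube-system -l app=vsphere-csi-node,role=vsphere-csi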