iconik Microservices

iconik uses a microservices architecture: each service is a small, self-contained unit of functionality concerned only with its own business function, and these services are composed together to form the overall iconik service.

[Architecture diagram: within a cloud service provider region and zone, an API Gateway and the ACLs, Assets, Auth, Files, Jobs, Metadata, Search, Transcode, Users and Web microservices run as managed Docker microservice instances on Kubernetes, handling auth, ACLs and requests. They are backed by a Cassandra cluster on persistent disk, Elastic, RabbitMQ and Redis clusters, cloud storage with multiple buckets for files, the Cloud Vision and Cloud Video Intelligence APIs, and BigQuery for data analysis.]

This gives flexibility in deployment and allows individual parts of the application to be scaled, either dynamically on demand or as needed, to meet customer load on the system.
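As a rough illustration, scaling one part of the application independently could look like the following sketch, which uses the official Kubernetes Python client; the deployment name, namespace and replica count are hypothetical.

```python
# A minimal sketch, assuming the official Kubernetes Python client is
# installed and local kubeconfig credentials for the cluster are available.
# The deployment name, namespace and replica count are hypothetical.
from kubernetes import client, config

config.load_kube_config()      # use local kubeconfig credentials
apps = client.AppsV1Api()

# Scale only the (hypothetical) transcode deployment; the other
# microservices keep their current replica counts.
apps.patch_namespaced_deployment_scale(
    name="transcode",
    namespace="iconik",
    body={"spec": {"replicas": 6}},
)
```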

Each microservice is packaged in a Docker container based on a minimal Alpine Linux distribution, containing only the resources it needs in order to keep a small footprint. These containers are built by an automated build system and deployed periodically as part of a release or a bug-fix deployment.
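A build step like this could be scripted with the Docker SDK for Python along the following lines; the build context path and image tag are hypothetical.

```python
# A minimal sketch, assuming a local build context at ./files-service
# containing an Alpine-based Dockerfile, and the Docker SDK for Python.
import docker

client = docker.from_env()

# Build the microservice image from its build context and tag it.
image, build_logs = client.images.build(
    path="./files-service",          # hypothetical build context
    tag="iconik/files-service:dev",  # hypothetical tag
)

# Report the resulting image size; Alpine-based images are typically small.
print(f"{image.tags[0]}: {image.attrs['Size'] / 1e6:.1f} MB")
```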

Kubernetes is used to automate the deployment, scaling and management of the containers. On Google Cloud we utilise the Google Kubernetes Engine service to take care of the Kubernetes infrastructure.

Kubernetes is given a cluster of hosts (using Google Compute Engine on Google Cloud) on which it dynamically schedules the Docker containers into Pods. Kubernetes uses an internal IP address scheme for communication that is not publicly accessible.
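For example, the cluster-internal addresses assigned to Pods can be inspected with the Kubernetes Python client, as in this sketch; the namespace is hypothetical.

```python
# A minimal sketch, assuming the official Kubernetes Python client and
# access to the cluster via a local kubeconfig; the namespace is hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Pod IPs come from the cluster-internal address range and are not
# reachable from outside the cluster.
for pod in core.list_namespaced_pod(namespace="iconik").items:
    print(pod.metadata.name, pod.status.pod_ip, pod.spec.node_name)
```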

Ingress and egress traffic passes through a Kubernetes-managed Ingress object, which is load balanced by an L7 global load balancer running on Google Cloud's Premium network tier.
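The external address that the L7 load balancer exposes for such an Ingress can be read back through the Kubernetes API, as in this sketch; the Ingress name and namespace are hypothetical.

```python
# A minimal sketch, assuming the official Kubernetes Python client; the
# Ingress name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# The Ingress status carries the external address of the L7 load balancer
# provisioned for it on Google Cloud.
ing = net.read_namespaced_ingress(name="iconik-gateway", namespace="iconik")
for lb in ing.status.load_balancer.ingress or []:
    print("External load balancer address:", lb.ip or lb.hostname)
```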

Internal logging and monitoring of GKE, the containers and the Compute Engine nodes are managed by Google Cloud Stackdriver.

Microservice API Documentation

Each microservice documents its own API. This documentation is presented through the Application Gateway microservice and published at https://app.iconik.io/docs/
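As a rough illustration of calling one of the documented endpoints from Python, the sketch below uses the requests library; the endpoint path, header names and credential values are assumptions and should be checked against the published documentation.

```python
# A rough sketch only; the endpoint path, header names and credential
# values are assumptions to verify against https://app.iconik.io/docs/
import requests

headers = {
    "App-ID": "YOUR_APP_ID",          # placeholder credential
    "Auth-Token": "YOUR_AUTH_TOKEN",  # placeholder credential
}

# Example: list assets through the assets microservice (assumed path).
resp = requests.get("https://app.iconik.io/API/assets/v1/assets/", headers=headers)
resp.raise_for_status()
print(resp.json())
```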

Learn more