
Building API with DataTrucker.IO

· 6 min read


DataTrucker.IO is a simple no-code / low-code API backend, completely free and licensed under Apache v2.

DataTrucker.IO is a product capable of reading simple JSON/YAML configs and generating the code necessary to turn them into an API. Along with building the code, it also hosts the code base on a Node.js server, i.e. it immediately makes the API available for consumption.

DataTrucker removes the most common activities a developer needs to perform on every new project. A few of these common activities are:

  • Creating an API endpoint with a specified business logic (using simple plugins)
  • Applying standard RBAC
  • Applying Authorization logic
  • Applying hardening on endpoints
  • Log management
  • Connecting to a variety of systems
  • Modularizing business logic
  • Best of all, doing it with little to no code

Let's get started#

In this article we will walk through installing DataTrucker on OpenShift and building a first API for a Postgres database. The process is similar in a Kubernetes environment.

Step 1: Create a Namespace called trucker#

oc new-project trucker

Step 2: Download and Install the Application#

DataTrucker.IO is available in the OperatorHub and can be added to your cluster as an operator.


Step 3: Navigate to the Operators#

  • Click on Installed Operators and open the "DataTrucker.IO" operator


Step 4: Create a DataTrucker Config by applying the YAML object#

Create a PVC for a database backend. Note: the Postgres DB provided using Crunchy Data containers is for getting started only; for production workloads we recommend a hardened, geo-redundant DB.

  1. Create a PVC called samplepvc

  2. Create an instance of the DatatruckerConfig object

  3. Before you click Create, ensure TempDB.enabled is true in the DatatruckerConfig object. This is required for prototyping the demo below.
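A minimal manifest for samplepvc might look like the following. This is a sketch using standard Kubernetes PersistentVolumeClaim fields; the storage size and access mode are assumptions, so adjust them for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: samplepvc
spec:
  accessModes:
    - ReadWriteOnce        # single-node read-write is enough for prototyping
  resources:
    requests:
      storage: 1Gi         # assumed size; size it for your data in practice
```

Apply it with `oc apply -f samplepvc.yaml` before creating the DatatruckerConfig object.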

Sample is available here: GITLAB

oc apply -f DatatruckerConfig.yaml

Let's understand what a Kind: DatatruckerConfig is.

The Config Object creates the following

A Postgres DB backend#

We provide a temporary, non-hardened DB from Crunchy Data, which can be created by enabling the following in the DataTrucker config. For production workloads, we recommend a hardened, geo-redundant database.

  TempDB:
    enabled: true
    pvc: samplepvc

A DB Configuration to use as backend#

In production systems, you would use a geo-redundant Postgres database:

  user: testuser
  password: password
  databasename: userdb
  hostname: db
  type: pg
  port: 5432

Crypto Configuration to use as backend#

  API:
    cryptokeys: |-
      ....

Detailed information here

API server backends configuration#

  API:
    name: API
    loginServer: |-
      ....
    managementServer: |-
      ....
    jobsServer: |-
      ....
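Putting the fragments above together, a complete DatatruckerConfig might be sketched as follows. The apiVersion placeholder and the exact nesting under spec are assumptions; the field values come from the fragments above, and the name datatruckerconfig-sample matches the config referenced in step 9:

```yaml
apiVersion: < operator api version >
kind: DatatruckerConfig
metadata:
  name: datatruckerconfig-sample
spec:
  TempDB:                  # prototype DB backend (Crunchy Data)
    enabled: true
    pvc: samplepvc
  DB:                      # connection details for the backend DB
    user: testuser
    password: password
    databasename: userdb
    hostname: db
    type: pg
    port: 5432
  API:
    name: API
    cryptokeys: |-
      ....
    loginServer: |-
      ....
    managementServer: |-
      ....
    jobsServer: |-
      ....
```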

Step 5: Create the Login and Management endpoints#


Login Endpoint#

This creates an endpoint for obtaining a login token.

apiVersion: < operator api version >
kind: DatatruckerFlow
metadata:
  name: login-endpoint
spec:
  Type: Login
  DatatruckerConfig: < the name of the config object created in step 4 >

Management Endpoint#

This creates an endpoint for RBAC management and credentials creation.

apiVersion: < operator api version >
kind: DatatruckerFlow
metadata:
  name: management-endpoint
spec:
  Type: Management
  DatatruckerConfig: < the name of the config object created in step 4 >

Note: this will create the deployments and service endpoints for both the UI and the Management API.

Step 6: Expose the management endpoint#

Expose the routes

$ oc get svc | grep endpoint
login-endpoint           ClusterIP   <none>   80/TCP   3m43s
management-endpoint      ClusterIP   <none>   80/TCP   3m29s
management-endpoint-ui   ClusterIP   <none>   80/TCP   3m28s

$ oc expose svc login-endpoint
$ oc expose svc management-endpoint-ui

$ oc get routes
NAME                     HOST/PORT                                         PATH   SERVICES                 PORT   TERMINATION   WILDCARD
login-endpoint           login-endpoint-trucker.apps-crc.testing                  login-endpoint           8080                 None
management-endpoint-ui   management-endpoint-ui-trucker.apps-crc.testing          management-endpoint-ui   9080                 None

Step 7: Login to the UI via a browser#

Create an Admin User




Step 8: Let's create a Postgres Credential for the API#

Until now we were installing; let's switch to building APIs.

Create Postgres credentials for the database of your choice:

  1. Expand the left navigation bar.
  2. Select Credentials.
  3. Open the Postgres Credentials pane.
  4. Click on Create Credentials.
  5. Enter your DB's details.


Step 9: Let's create a Postgres API#

Create a Flow object with the below job spec.

The spec creates the following:

  1. A new microservice to host the API
  2. The microservice will have 2 APIs on its route, i.e.:
    1. postgres1
      • gets the current date and injects the user-sent parameter into the SQL
      • is a POST request
      • input sanitization for the user input variable "userinput"
    2. postgres2
      • gets the list of tables available
      • is a GET request
---
apiVersion: < operator api version >
kind: DatatruckerFlow
metadata:
  name: my-first-api
spec:
  DatatruckerConfig: datatruckerconfig-sample
  JobDefinitions:
    - credentialname: db                                  # < cred name from step 8 >
      job_timeout: 600
      name: postgres1
      restmethod: POST
      script: 'select ''[[userinput]]'' as userinput; '   # < query you want to execute >
      tenant: Admin
      type: DB-Postgres
      validations:
        properties:
          userinput:
            maxLength: 18
            pattern: '^[a-z0-9]*$'
            type: string
        type: object
    - credentialname: db                                  # < cred name from step 8 >
      job_timeout: 600
      name: postgres2
      restmethod: GET
      script: select * from information_schema.tables     # < query you want to execute >
      tenant: Admin
      type: DB-Postgres
  Type: Job
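The validations block is a JSON-Schema-style constraint: userinput must match the pattern ^[a-z0-9]*$ and be at most 18 characters. You can sanity-check candidate inputs locally with the same rules; this is a sketch where grep merely mimics what the server-side validation enforces:

```shell
# Mimic the validations block: lowercase alphanumerics only, max length 18
check() {
  if echo "$1" | grep -Eq '^[a-z0-9]{0,18}$'; then
    echo "valid"
  else
    echo "invalid"
  fi
}

check "myfirstresponse"    # valid: lowercase alphanumeric, 15 chars
check "Has-Punctuation!"   # invalid: uppercase and punctuation
```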

Now search for the service

$ oc get svc | grep my-first-api
my-first-api   ClusterIP   <none>   80/TCP   45s
$ oc expose svc my-first-api
$ oc get routes | grep my-first-api
my-first-api   my-first-api-trucker.apps-crc.testing   my-first-api   8080   None

Now that you have a URL, let's go test it out.

The URL will be http://<your api route>/api/v1/jobs/<name of the JobDefinition defined in the YAML>

In the above example, 2 JobDefinitions were created:

  • postgres1 of type POST
  • postgres2 of type GET
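Using the route hostname from the `oc get routes` output above, the two job URLs can be assembled as follows (a sketch; substitute your own cluster's route hostname):

```shell
# Assemble job endpoint URLs from the route hostname and the job names
API_ROUTE="my-first-api-trucker.apps-crc.testing"   # from: oc get routes
for JOB in postgres1 postgres2; do
  echo "http://${API_ROUTE}/api/v1/jobs/${JOB}"
done
# prints:
# http://my-first-api-trucker.apps-crc.testing/api/v1/jobs/postgres1
# http://my-first-api-trucker.apps-crc.testing/api/v1/jobs/postgres2
```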

Step 10: Test out your APIs#

Get a Login token from the login endpoint

curl --location --request POST 'http://login-endpoint-trucker.<wildcard.domain>/api/v1/login' \
--header 'Content-Type: application/json' \
--data-raw '{ "username": "xxx", "password": "xxxxxxxx", "tenant": "Admin"}'

Response:
{
    "status": true,
    "username": "xxx",
    "token": "xxxxxxxxxxxx"
}
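To reuse the token in the follow-up calls, you can capture it into a shell variable. A minimal sketch, assuming the login response was saved to login-response.json; sed is used here as a lightweight stand-in for a JSON parser such as jq:

```shell
# Simulate a saved login response (in practice, redirect the curl output here)
cat > login-response.json <<'EOF'
{ "status": true, "username": "xxx", "token": "xxxxxxxxxxxx" }
EOF

# Extract the "token" field and export it for later curl calls
TOKEN=$(sed -n 's/.*"token": *"\([^"]*\)".*/\1/p' login-response.json)
echo "$TOKEN"
```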

Now use the login token against your APIs

The first one#

curl --location --request POST 'http://my-first-api-trucker.<wildcard.domain>/api/v1/jobs/postgres1' \
--header 'Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data-raw '{    "userinput": "myfirstresponse"}'

Response:
{
    "reqCompleted": true,
    "date": "2021-09-05T22:05:58.064Z",
    "reqID": "req-3w",
    "data": {
        "command": "SELECT",
        "rowCount": 1,
        "oid": null,
        "rows": [
           .............

The second one#

curl --location --request GET 'http://my-first-api-trucker.<wildcard.domain>/api/v1/jobs/postgres2' \
--header 'Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

Response:
{
    "reqCompleted": true,
    "date": "2021-09-05T22:03:58.389Z",
    "reqID": "req-35",
    "data": {
        "command": "SELECT",
        "rowCount": 185,
        "oid": null,
        "rows": [
            {
                " .......

Watch the quick elevator pitch to learn more.
