# GoFr — full content dump > Concatenated plaintext of every public docs page on https://gofr.dev. > Intended for AI tools that ingest a single long context (Anthropic Files > API, OpenAI Assistants File Search, ChatGPT custom GPT knowledge, etc.) > rather than crawling page-by-page. > Generated: 2026-05-08T07:02:35.748Z > Site: https://gofr.dev · Repo: https://github.com/gofr-dev/gofr > For a smaller curated link index instead: https://gofr.dev/llms.txt > For an AI-coding-assistant primer: https://gofr.dev/AGENTS.md --- # Quick Start ## https://gofr.dev/docs/quick-start/add-rest-handlers # Add REST Handlers GoFr simplifies the process of implementing CRUD (Create, Read, Update, Delete) operations by enabling the automatic generation of handlers directly from Go structs. This feature eliminates the need for writing repetitive boilerplate code, allowing developers to focus on application logic. ## Default Behavior If custom handlers are not implemented on the struct, GoFr provides default handlers for each CRUD operation. These handlers perform basic database interactions: - **Create**: - **POST**: `/entity` Inserts a new record based on data provided in a JSON request body. - **Read**: - **GET**: `/entity` Retrieves all entities of the type specified by the struct. - **GET**: `/entity/{id}` Retrieves a specific entity identified by the {id} path parameter. - **Update**: - **PUT**: `/entity/{id}` Updates an existing record identified by the {id} path parameter, based on data provided in a JSON request body. - **Delete**: - **DELETE**: `/entity/{id}` Deletes an existing record identified by the {id} path parameter.
> [!NOTE] > The registered routes will have the same name as the given struct, but if we want to change the route name, we can implement the `RestPath` method in the struct: ```go type userEntity struct { Id int `json:"id"` Name string `json:"name"` Age int `json:"age"` IsEmployed bool `json:"isEmployed"` } func (u *userEntity) RestPath() string { return "users" } ``` ## Overriding Default Handlers While the default handlers provide basic functionality, users might want to customize their behavior for specific use cases. The AddRESTHandlers feature allows users to override these handlers by implementing methods within the struct itself. ## Database Table Name By default, GoFr assumes the struct name in snake-case matches the database table name for querying data. For example, the `UserEntity` struct matches the `user_entity` database table, the `cardConfig` struct matches the `card_config` database table, etc. To change the table name, implement the `TableName` method in the struct: ```go type userEntity struct { Id int `json:"id"` Name string `json:"name"` Age int `json:"age"` IsEmployed bool `json:"isEmployed"` } func (u *userEntity) TableName() string { return "user" } ``` ## Adding Database Constraints By default, GoFr assumes the `id` of a given struct is inserted manually, but to support SQL constraints like `auto_increment` and `not_null`, users can use the `sql` tag while declaring the struct fields. ```go type user struct { ID int `json:"id" sql:"auto_increment"` Name string `json:"name" sql:"not_null"` Age int `json:"age"` IsEmployed bool `json:"isEmployed"` } ``` Now when posting data for the user struct, `ID` will be auto-incremented and `Name` will be a not-null column in the table. ## Benefits of Adding REST Handlers in GoFr 1. Reduced Boilerplate Code: Eliminates repetitive code for CRUD operations, freeing users to focus on core application logic. 2. Consistency: Ensures consistency in CRUD operations across different entities by using a standardized approach. 3.
Flexibility: Allows developers to customize CRUD behavior as per application requirements, providing flexibility and extensibility. ## Example ```go package main import ( "gofr.dev/examples/using-add-rest-handlers/migrations" "gofr.dev/pkg/gofr" ) type user struct { Id int `json:"id"` Name string `json:"name"` Age int `json:"age"` IsEmployed bool `json:"isEmployed"` } // GetAll : Users can override specific handlers by implementing them like this func (u *user) GetAll(c *gofr.Context) (any, error) { return "user GetAll called", nil } func main() { // Create a new application a := gofr.New() // Add migrations to run a.Migrate(migrations.All()) // AddRESTHandlers creates CRUD handlers for the given entity err := a.AddRESTHandlers(&user{}) if err != nil { a.Logger().Fatal(err) } // Run the application a.Run() } ``` In this example, we define a user struct representing a database entity. The `GetAll` method in the provided code demonstrates how to override the default behavior for retrieving all entities. This method can be used to implement custom logic for filtering, sorting, or retrieving additional data along with the entities. ## A Few Points to Consider **1. Passing Struct by Reference** The struct should always be passed by reference to the `AddRESTHandlers` method. **2. Field Naming Convention** GoFr assumes that struct fields in snake_case match the database column names. * For example, the `IsEmployed` field in the struct matches the `is_employed` column in the database. * Similarly, the `Age` field matches the `age` column. **3. Primary Key** The first field of the struct is typically used as the primary key for data operations. However, this behavior can be customized using GoFr's features. **4. Datatype Conversions** | Go Type | SQL Type | Description | |---|---|---| | `uuid.UUID` (from `github.com/google/uuid` or `github.com/satori/go.uuid`) | `CHAR(36)` or `VARCHAR(36)` | UUIDs are typically stored as 36-character strings in SQL databases.
| | `string` | `VARCHAR(n)` or `TEXT` | Use `VARCHAR(n)` for fixed-length strings, while `TEXT` is for longer, variable-length strings. | | `int`, `int32`, `int64`, `uint`, `uint32`, `uint64` | `INT`, `BIGINT`, `SMALLINT`, `TINYINT`, `INTEGER` | Use `INT` for general integer values, `BIGINT` for large values, and `SMALLINT` or `TINYINT` for smaller ranges. | | `bool` | `BOOLEAN` or `TINYINT(1)` | Use `BOOLEAN` (supported by most SQL databases like PostgreSQL, MySQL) or `TINYINT(1)` in MySQL (where `0` is false, and `1` is true). | | `float32`, `float64` | `FLOAT`, `DOUBLE`, `DECIMAL` | Use `DECIMAL` for precise decimal numbers (e.g., financial data), `FLOAT` or `DOUBLE` for approximate floating-point numbers. | | `time.Time` | `DATE`, `TIME`, `DATETIME`, `TIMESTAMP` | Use `DATE` for just the date, `TIME` for the time of day, and `DATETIME` or `TIMESTAMP` for both date and time. | > #### Check out the example on how to add REST Handlers in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/tree/main/examples/using-add-rest-handlers) --- ## https://gofr.dev/docs/quick-start/configuration # Configurations GoFr simplifies configuration management by reading configuration via environment variables. Application code is decoupled from how configuration is managed, as per the {% new-tab-link title="12-factor" href="https://12factor.net/config" /%} methodology. Configs in GoFr can be used to initialize datasources, tracing, set log levels, and change the default HTTP or metrics ports. This abstraction provides a user-friendly interface for configuring an application without modifying the code itself. To set configs, create a `configs` directory in the project's root and add a `.env` file. Follow this directory structure within the GoFr project: ```text my-gofr-app/ ├── configs/ │ ├── .local.env │ ├── .dev.env │ ├── .staging.env │ └── .prod.env ├── main.go └── ...
``` By default, GoFr starts the HTTP server on port 8000; to change that, add the config `HTTP_PORT`. Similarly, to set the app name, add `APP_NAME`. For example: ```dotenv # configs/.env APP_NAME=test-service HTTP_PORT=8001 ``` ## Configuring Environments in GoFr GoFr uses an environment variable, `APP_ENV`, to determine the application's current environment. This variable also guides GoFr to load the corresponding environment file. ### Example: GoFr always loads `configs/.env` first (if present) as the base, then overlays `configs/.{APP_ENV}.env` on top. The overlay file's values override matching keys from `.env`; keys not set in the overlay continue to come from `.env`. If `APP_ENV` is unset, GoFr overlays `configs/.local.env` instead. System environment variables take precedence over both files. For example, with `APP_ENV=dev` GoFr loads `configs/.env` and then overlays `configs/.dev.env`. Both files are loaded if both exist — the overlay does not replace `.env` wholesale. _For example, to run the application in the `dev` environment, use the following command:_ ```bash APP_ENV=dev go run main.go ``` This approach ensures that the correct configurations are used for each environment, providing flexibility and control over the application's behavior in different contexts. --- ## https://gofr.dev/docs/quick-start/connecting-mysql # Connecting to MySQL Just like Redis, GoFr supports connections to various SQL-compatible databases (MySQL, MariaDB, PostgreSQL, and Supabase) based on configuration variables. ## MySQL/MariaDB ### Setup Users can run MySQL/MariaDB and create a database locally using the following Docker command: ```bash docker run --name gofr-mysql -e MYSQL_ROOT_PASSWORD=root123 -e MYSQL_DATABASE=test_db -p 3306:3306 -d mysql:8.0.30 ``` MySQL takes ~10–15 seconds to bootstrap.
Wait for it to be ready before running the next command: ```bash until docker exec gofr-mysql mysqladmin ping -uroot -proot123 --silent; do sleep 2; done ``` Access the `test_db` database and create a `customers` table with columns `id` and `name`. Change MySQL to MariaDB as needed: ```bash docker exec gofr-mysql mysql -uroot -proot123 test_db -e "CREATE TABLE customers (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) NOT NULL);" ``` Now that the database with the table is ready, we can connect our GoFr server to MySQL/MariaDB. ### Configuration & Usage After adding MySQL/MariaDB configs, `.env` will be updated to the following. Use `DB_DIALECT=mysql` for both MySQL and MariaDB. ```dotenv # configs/.env APP_NAME=test-service HTTP_PORT=8000 REDIS_HOST=localhost REDIS_PORT=2002 REDIS_PASSWORD=password DB_HOST=localhost DB_USER=root DB_PASSWORD=root123 DB_NAME=test_db DB_PORT=3306 DB_DIALECT=mysql DB_CHARSET=utf8 #(optional) ``` ### TLS/SSL Configuration GoFr supports secure TLS connections to MySQL/MariaDB databases. Configure TLS by setting the `DB_SSL_MODE` environment variable and optionally providing certificate paths for enhanced security.
#### Available SSL Modes | SSL Mode | Description | |----------|-------------| | `disable` | No TLS encryption (default) | | `preferred` | Attempts TLS, falls back to plain connection if unavailable | | `require` | Enforces TLS but skips certificate validation | | `skip-verify` | Enforces TLS without validating server certificate | | `verify-ca` | Enforces TLS and validates server certificate against CA | | `verify-full` | Enforces TLS with full certificate validation (including hostname) | #### TLS Environment Variables | Variable | Required | Description | |----------|----------|-------------| | `DB_SSL_MODE` | No | TLS mode (defaults to `disable`) | | `DB_TLS_CA_CERT` | Conditional | Path to CA certificate (required for `verify-ca`/`verify-full`) | | `DB_TLS_CLIENT_CERT` | No | Path to client certificate (for mutual TLS) | | `DB_TLS_CLIENT_KEY` | No | Path to client private key (for mutual TLS) | #### Example Configuration ```dotenv # configs/.env DB_HOST=localhost DB_USER=root DB_PASSWORD=root123 DB_NAME=test_db DB_PORT=3306 DB_DIALECT=mysql # Basic TLS (no certificate validation) DB_SSL_MODE=require # OR with CA certificate validation (production) DB_SSL_MODE=verify-ca DB_TLS_CA_CERT=/path/to/ca-cert.pem # OR with mutual TLS (enhanced security) DB_SSL_MODE=verify-full DB_TLS_CA_CERT=/path/to/ca-cert.pem DB_TLS_CLIENT_CERT=/path/to/client-cert.pem DB_TLS_CLIENT_KEY=/path/to/client-key.pem ``` ## PostgreSQL ### Setup Users can run PostgreSQL and create a database locally using the following Docker command: ```bash docker run --name gofr-postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=test_db -p 5432:5432 -d postgres:14 ``` Access the `test_db` database and create a `customers` table with columns `id` and `name`: ```bash docker exec gofr-postgres psql -U postgres test_db -c "CREATE TABLE customers (id SERIAL PRIMARY KEY, name VARCHAR(255) NOT NULL);" ``` ### Configuration & Usage After adding PostgreSQL configs, `.env` will be updated to the following: ```dotenv #
configs/.env APP_NAME=test-service HTTP_PORT=8000 REDIS_HOST=localhost REDIS_PORT=2002 REDIS_PASSWORD=password DB_HOST=localhost DB_USER=postgres DB_PASSWORD=postgres DB_NAME=test_db DB_PORT=5432 DB_DIALECT=postgres DB_SSL_MODE=disable #(optional, defaults to disable) ``` ## Supabase [Supabase](https://supabase.com) is an open-source Firebase alternative that provides a PostgreSQL database with additional features. GoFr supports connecting to Supabase databases with specialized configuration. ### Setup To use Supabase with GoFr: 1. Sign up for a [Supabase account](https://supabase.com) 2. Create a new project 3. Get your connection information from the Supabase dashboard: - Project Reference ID - Database Password - Region (for pooled connections) ### Configuration & Usage GoFr provides three connection types for Supabase: 1. **Direct Connection**: Standard connection to the database 2. **Session Pooler**: Connection via Supabase's connection pooler (maintains session variables) 3. **Transaction Pooler**: Connection via Supabase's transaction pooler (resets session variables) Add Supabase configuration to your `.env` file: ```dotenv # configs/.env APP_NAME=test-service HTTP_PORT=8000 # Supabase configuration DB_DIALECT=supabase DB_USER=postgres DB_PASSWORD=your_database_password DB_NAME=postgres DB_PORT=5432 # Optional, defaults based on connection type DB_SSL_MODE=require # Optional, always forced to "require" for Supabase # Supabase-specific configs SUPABASE_PROJECT_REF=your_project_ref_id SUPABASE_CONNECTION_TYPE=direct # Options: direct, session, transaction SUPABASE_REGION=us-east-1 # Required for pooled connections ``` Alternatively, you can provide a full connection string: ```dotenv DB_DIALECT=supabase DB_URL=postgresql://postgres:your_password@db.your_project_ref.supabase.co:5432/postgres ``` #### Connection Types - **Direct** (`SUPABASE_CONNECTION_TYPE=direct`): Connects directly to your database at `db.[PROJECT_REF].supabase.co:5432` - **Session Pooler** 
(`SUPABASE_CONNECTION_TYPE=session`): Uses Supabase's connection pooler at `aws-0-[REGION].pooler.supabase.co:5432` - **Transaction Pooler** (`SUPABASE_CONNECTION_TYPE=transaction`): Uses Supabase's transaction pooler at `aws-0-[REGION].pooler.supabase.co:6543` **Note:** For pooled connections, the `SUPABASE_REGION` parameter is required. ## Database Usage Example For all supported SQL databases, GoFr provides a consistent API to interact with your data. In the following example, we'll store customer data using **POST** `/customer` and then use **GET** `/customer` to retrieve it. We will be storing the customer data with `id` and `name`. After adding code to insert and retrieve data from the SQL datastore, `main.go` will be updated to the following. ```go package main import ( "errors" "github.com/redis/go-redis/v9" "gofr.dev/pkg/gofr" ) type Customer struct { ID int `json:"id"` Name string `json:"name"` } func main() { // initialize gofr object app := gofr.New() app.GET("/redis", func(ctx *gofr.Context) (any, error) { // Get the value using the Redis instance val, err := ctx.Redis.Get(ctx.Context, "test").Result() if err != nil && !errors.Is(err, redis.Nil) { // If the key is not found, we are not considering this an error and returning "" return nil, err } return val, nil }) app.POST("/customer/{name}", func(ctx *gofr.Context) (any, error) { name := ctx.PathParam("name") // Inserting a customer row in database using SQL _, err := ctx.SQL.ExecContext(ctx, "INSERT INTO customers (name) VALUES (?)", name) return nil, err }) app.GET("/customer", func(ctx *gofr.Context) (any, error) { var customers []Customer // Getting the customers from the database using SQL rows, err := ctx.SQL.QueryContext(ctx, "SELECT * FROM customers") if err != nil { return nil, err } defer rows.Close() for rows.Next() { var customer Customer if err := rows.Scan(&customer.ID, &customer.Name); err != nil { return nil, err } customers = append(customers, customer) } // return the customers return customers,
nil }) app.Run() } ``` To insert customer data, run these curl commands in the terminal: ```bash # here abc and xyz after /customer are the path parameters curl --location --request POST 'http://localhost:8000/customer/abc' curl --location --request POST 'http://localhost:8000/customer/xyz' ``` Now when we access {% new-tab-link title="http://localhost:8000/customer" href="http://localhost:8000/customer" /%} we should see the following output: ```json { "data": [ { "id": 1, "name": "abc" }, { "id": 2, "name": "xyz" } ] } ``` **Note:** When using PostgreSQL or Supabase, you may need to use `$1` instead of `?` in SQL queries, depending on your driver configuration. ## Enabling Read/Write Splitting in MySQL (DBResolver) GoFr provides built-in support for read/write splitting using its `DBRESOLVER` module for **MySQL**. This feature automatically routes requests to the **primary database** or **read replicas** based on: - **HTTP Method**: - Write operations (`POST`, `PUT`, `PATCH`, `DELETE`) → Primary - Read operations (`GET`, `HEAD`, `OPTIONS`) → Replicas - **Route Configuration**: Force specific routes to always use the primary database for strong consistency ### Installation Import GoFr's dbresolver for MySQL: ```shell go get gofr.dev/pkg/gofr/datasource/dbresolver@latest ``` ### Configuration **1. Environment Variables** Configure the primary database in your `.env` file: ```editorconfig # Primary database DB_HOST=localhost DB_PORT=3306 DB_USER=root DB_PASSWORD=root123 DB_NAME=test_db DB_DIALECT=mysql ``` **2. Initialize DBResolver** After importing the package, you can configure the DBResolver in your GoFr application using `dbresolver.InitDBResolver`.
You can choose the load balancing strategy and enable fallback to primary: ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/dbresolver" ) type Customer struct { ID int `db:"id"` Name string `db:"name"` } func main() { a := gofr.New() // Initialize DB resolver with default settings err := dbresolver.InitDBResolver(a, &dbresolver.Config{ Strategy: dbresolver.StrategyRoundRobin, // use round-robin strategy or random strategy ReadFallback: true, // allow reads on primary if all replicas are down MaxFailures: 3, // number of allowed failures before marking a replica as down TimeoutSec: 30, // timeout for marking a replica as down PrimaryRoutes: []string{"/admin", "/api/payments/*"}, // routes that should go to primary Replicas: []dbresolver.ReplicaCredential{ { Host: "localhost:3307", User: "replica_user1", Password: "pass1", }, { Host: "replica2.example.com:3308", User: "replica_user2", Password: "pass2", }, { Host: "replica3.example.com:3309", User: "replica_user3", Password: "pass3", }, }, }) if err != nil { a.Logger().Errorf("failed to initialize db resolver: %v", err) } // Read endpoint - goes to replica a.GET("/customers", func(c *gofr.Context) (interface{}, error) { var customers []Customer c.SQL.Select(c, &customers, "SELECT id, name FROM customers") return customers, nil }) // Write endpoint - goes to primary a.POST("/customers", func(c *gofr.Context) (interface{}, error) { var customer Customer if err := c.Bind(&customer); err != nil { return nil, err } _, err := c.SQL.Exec("INSERT INTO customers (name) VALUES (?)", customer.Name) return customer, err }) // Admin endpoint - forced to primary a.GET("/admin/customers", func(c *gofr.Context) (interface{}, error) { var customers []Customer c.SQL.Select(c, &customers, "SELECT id, name FROM customers") return customers, nil }) a.Run() } ``` **3.
Connection Pool Tuning (Optional)** By default, replica pools are auto-scaled based on primary settings: ```editorconfig # Defaults (automatically calculated) DB_MAX_IDLE_CONNECTION=2 → Replicas: 8 (2 × 4) DB_MAX_OPEN_CONNECTION=20 → Replicas: 40 (20 × 2) ``` Override with: ```editorconfig DB_REPLICA_MAX_IDLE_CAP=100 DB_REPLICA_MIN_IDLE=5 DB_REPLICA_DEFAULT_IDLE=15 DB_REPLICA_MAX_OPEN_CAP=500 DB_REPLICA_MIN_OPEN=20 DB_REPLICA_DEFAULT_OPEN=150 ``` **Benefits** - Performance: Offloads read traffic from the primary, reducing latency. - Scalability: Easily scale reads by adding more replicas. - Resilience: Ensures high availability through automatic fallback. --- ## https://gofr.dev/docs/quick-start/connecting-redis # Connecting to Redis GoFr simplifies the process of connecting to Redis. ## Setup: Ensure we have Redis installed on our system. Optionally, we can use Docker to set up a development environment with password authentication as described below. ```bash docker run --name gofr-redis -p 2002:6379 -d \ -e REDIS_PASSWORD=password \ redis:7.0.5 --requirepass password ``` We can set a sample key `greeting` using the following command: ```bash docker exec gofr-redis redis-cli -a password SET greeting "Hello from Redis." ``` ## Configuration & Usage: GoFr applications rely on environment variables to configure and connect to a Redis server. These variables are stored in a `.env` file located within the `configs` directory at your project root. ### Required Environment Variables: {% table %} - Key - Description --- - REDIS_HOST - Hostname or IP address of your Redis server --- - REDIS_PORT - Port number your Redis server listens on (default: `6379`) --- - REDIS_USER - Redis username; multiple users with ACLs can be configured. 
[See official docs](https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/) --- - REDIS_PASSWORD - Redis password (required only if authentication is enabled) --- - REDIS_DB - Redis database number (default: `0`) --- {% /table %} ## TLS Support (Optional): {% table %} - Key - Description --- - REDIS_TLS_ENABLED - Set to `"true"` to enable TLS --- - REDIS_TLS_CA_CERT_PATH - File path to the CA certificate used to verify the Redis server --- - REDIS_TLS_CERT_PATH - File path to the client certificate (for mTLS) --- - REDIS_TLS_KEY_PATH - File path to the client private key (for mTLS) --- {% /table %} ## Example `.env` File ```env REDIS_HOST=redis.example.com REDIS_PORT=6379 REDIS_USER=appuser REDIS_PASSWORD=securepassword REDIS_DB=0 # TLS settings (optional) REDIS_TLS_ENABLED=true REDIS_TLS_CA_CERT_PATH=./configs/certs/ca.pem REDIS_TLS_CERT_PATH=./configs/certs/client.crt REDIS_TLS_KEY_PATH=./configs/certs/client.key ``` The following code snippet demonstrates how to retrieve data from a Redis key named "greeting": ```go package main import ( "errors" "github.com/redis/go-redis/v9" "gofr.dev/pkg/gofr" ) func main() { // Initialize GoFr object app := gofr.New() app.GET("/redis", func(ctx *gofr.Context) (any, error) { // Get the value using the Redis instance val, err := ctx.Redis.Get(ctx.Context, "greeting").Result() if err != nil && !errors.Is(err, redis.Nil) { // If the key is not found, we are not considering this an error and returning "" return nil, err } return val, nil }) // Run the application app.Run() } ``` --- ## https://gofr.dev/docs/quick-start/introduction # Hello, GoFr {% answer %} GoFr is an opinionated Go framework for production microservices. The fastest path to a running service is: `go mod init`, `go get gofr.dev`, then a `main.go` that calls `gofr.New()` and registers a handler with `app.GET("/greet", handler)`. 
The framework wires HTTP routing, structured logging, OpenTelemetry traces, Prometheus metrics, datasource clients, and graceful shutdown automatically — no extra setup. {% /answer %} GoFr is an opinionated Go framework for production microservices. It bundles HTTP routing, structured logging, OpenTelemetry traces, Prometheus metrics, datasource clients, and graceful shutdown so you can focus on handler logic instead of plumbing. This page gets you from `go mod init` to a running, observable HTTP server in under five minutes. ## Prerequisites - Go 1.25 or above. Check with `go version`. - Familiarity with Go syntax — the {% new-tab-link title="Golang Tour" href="https://tour.golang.org/" /%} is a good 30-minute primer if you're new. ## Write your first GoFr API Let's start by initializing the {% new-tab-link title="go module" href="https://go.dev/ref/mod" /%} by using the following command. ```bash go mod init github.com/example ``` Add {% new-tab-link title="gofr" href="https://github.com/gofr-dev/gofr" /%} package to the project using the following command. ```bash go get gofr.dev ``` This code snippet showcases the creation of a simple GoFr application that defines a route and serves a response. You can add this code to your main.go file. ```go package main import "gofr.dev/pkg/gofr" func main() { // initialize gofr object app := gofr.New() // register route greet app.GET("/greet", func(ctx *gofr.Context) (any, error) { return "Hello World!", nil }) // Runs the server, it will listen on the default port 8000. // it can be over-ridden through configs app.Run() } ``` Before starting the server, run the following command in your terminal to ensure you have downloaded and synchronized all required dependencies for your project. 
```bash go mod tidy ``` Once the dependencies are synchronized, start the GoFr server using the following command: ```bash go run main.go ``` This starts the server on port 8000; the `/greet` endpoint can be accessed from your browser at {% new-tab-link title="http://localhost:8000/greet" href="http://localhost:8000/greet" /%}, and you should see the following output with _status code 200_, per REST standards. ```json {"data":"Hello World!"} ``` ## Understanding the example The `hello-world` server involves three essential steps: 1. **Creating GoFr Server:** When `gofr.New()` is called, it initializes the framework and handles various setup tasks like initializing the logger, metrics, datasources, etc., based on the configs. _This single line is a standard part of all GoFr servers._ 2. **Attaching a Handler to a Path:** In this step, the server is instructed to associate an HTTP request with a specific handler function. This is achieved through `app.GET("/greet", HandlerFunction)`, where _GET /greet_ maps to HandlerFunction. Likewise, `app.POST("/todo", ToDoCreationHandler)` links a _POST_ request to the `/todo` endpoint with _ToDoCreationHandler_. **Good To Know** > In Go, functions are first-class citizens, allowing easy handler definition and reference. > HTTP Handler functions should follow the `func(ctx *gofr.Context) (any, error)` signature. > They take a context as input, returning two values: the response data and an error (set to `nil` when there is no error). GoFr {% new-tab-link newtab=false title="context" href="/docs/references/context" /%} `ctx *gofr.Context` serves as a wrapper for requests, responses, and dependencies, providing various functionalities. 3. **Starting the server** When `app.Run()` is called, it configures and starts the HTTP server along with its middlewares. It manages essential features such as routes for health check endpoints, the metrics server, favicon, etc. It starts the server on the default port 8000.
## Default ports and endpoints Out of the box, `app.Run()` opens two listeners (a third is added when you use gRPC). If any of these ports are taken on your machine, GoFr will fail to start — set the matching env var in `configs/.env` to override. | Server | Default port | Override env var | Endpoints exposed | |---|---|---|---| | **HTTP** | `8000` | `HTTP_PORT` | Your routes, plus `/.well-known/health`, `/.well-known/alive`, `/.well-known/swagger`, `/favicon.ico` (and `/.well-known/graphql/ui` if GraphQL is enabled). | | **Metrics** (Prometheus) | `2121` | `METRICS_PORT` (set to `0` to disable) | `/metrics` (Prometheus exposition format). Scraped by Prometheus / kube-prometheus-stack. | | **gRPC** | `9000` | `GRPC_PORT` | Your registered gRPC services. Only opened if you call `app.RegisterService(...)`. | So a fresh `app := gofr.New(); app.Run()` is reachable at: - `http://localhost:8000/` - `http://localhost:8000/.well-known/alive` → `200 OK` (use this for K8s liveness probes) - `http://localhost:8000/.well-known/health` → JSON status of registered datasources (use this for readiness probes) - `http://localhost:2121/metrics` → Prometheus metrics All `/.well-known/*` paths are auth-exempt by default, so health probes don't need credentials. For the full list of configurable env vars, see [GoFr Configuration Options](/docs/references/configs). --- ## https://gofr.dev/docs/quick-start/observability # Observability GoFr, by default, manages observability in different ways once the server starts: ## Logs Logs offer real-time information, providing valuable insights and immediate visibility into the ongoing state and activities of the system. It helps in identifying errors, debugging and troubleshooting, monitor performance, analyzing application usage, communications etc. GoFr logger allows customizing the log level, which provides flexibility to adjust logs based on specific needs. 
Logs are generated only for events equal to or above the specified log level; by default, GoFr logs at _INFO_ level. The log level can be changed by setting the environment variable `LOG_LEVEL` to _DEBUG, INFO, NOTICE, WARN, ERROR or FATAL_. When the GoFr server runs, it prints a log for reading configs, database connection, requests, database queries, missing configs, etc. They contain information such as the request's correlation ID, status codes, request time, etc. ### Log Levels #### DEBUG This is the lowest priority level. It represents the most detailed/granular information. **Note:** `DEBUG` logs should be enabled only in development or controlled troubleshooting scenarios. They are typically disabled in production environments due to performance overhead and security risks. **Example** ```Go ctx.Debug("Calc trace - Price:", 150, "Discount:", 0.2, "Tax Multiplier:", 1.05) ``` --- #### INFO `INFO` represents normal operational events during application execution and acts as the default logging level, ensuring baseline observability without excessive verbosity. **Example** ```Go ctx.Info("Application configuration loaded", "Source", "env") ``` --- #### NOTICE A level higher than `INFO` but lower than `WARN`. It shares the same visual prominence as a warning but implies a "normal" condition rather than a problem. In simple words, it's used for events that are normal but rare and significant. **Example** ```Go ctx.Notice("Configuration hot-reload triggered by system admin") ``` --- #### WARN `WARN` should represent abnormal runtime conditions that indicate instability or degraded operation (retries, fallbacks, transient failures), not long-term code hygiene issues like deprecated API usage. If something would show up repeatedly in a healthy system, it shouldn't be a `WARN`, otherwise the signal gets diluted and operators start ignoring it. **Example** ```Go ctx.Warn("Database connection timeout.
Retrying...", "attempt", 1, "retry_after", "2s") ``` --- #### ERROR Indicates a failure event. This level routes logs to `stderr` (standard error), ensuring visibility to error-tracking tools. **Example** ```Go ctx.Error("DB Query Timeout: Analytics fetch failed.", "error", errors.New("query execution exceeded 3000ms")) ``` --- #### FATAL The highest priority level. `FATAL` represents a critical system failure where the application cannot function. **Note:** `FATAL` terminates the process immediately and is intended only for startup-time failures, not runtime request handling. **Example** ```Go app.Logger().Fatal("Startup Failure: Mandatory SSL certificate missing.", "path", "/etc/certs/server.crt") ``` --- > **Note:** Performance & Log Volume. >1. Early Exit Optimization: The logger implements an "Early Exit" strategy. If the incoming log level is lower than the configured `LOG_LEVEL`, the function returns immediately before performing any formatting or allocation. >2. Locking Overhead: The terminal output utilizes a mutex lock to ensure thread safety. --- {% figure src="/quick-start-logs.png" alt="Pretty Printed Logs" /%} Logs are well-structured; they are of type JSON when exported to a file, so that they can be pushed to logging systems such as {% new-tab-link title="Loki" href="https://grafana.com/oss/loki/" /%}, Elasticsearch, etc. ## Metrics Metrics enable performance monitoring by providing insights into response times, latency, throughput, and resource utilization (tracking CPU, memory, and disk I/O consumption across services), facilitating capacity planning and scalability efforts. Metrics play a pivotal role in fault detection and troubleshooting, offering visibility into system behavior. They are instrumental in measuring and meeting service-level agreements (SLAs) to ensure expected performance and reliability. GoFr publishes metrics on port _2121_ at the _/metrics_ endpoint in Prometheus format.
### Default Metrics {% table %} - Name - Type - Description --- - app_go_numGC - gauge - Number of completed Garbage Collector cycles --- - app_go_routines - gauge - Number of Go routines running --- - app_go_sys - gauge - Number of total bytes of memory --- - app_sys_memory_alloc - gauge - Number of bytes allocated for heap objects --- - app_sys_total_alloc - gauge - Number of cumulative bytes allocated for heap objects --- - app_info - gauge - Number of instances running with info of app and framework --- - app_http_response - histogram - Response time of HTTP requests in seconds --- - app_http_service_response - histogram - Response time of HTTP service requests in seconds --- - app_sql_open_connections - gauge - Number of open SQL connections --- - app_sql_inUse_connections - gauge - Number of inUse SQL connections --- - app_sql_stats - histogram - Response time of SQL queries in milliseconds --- - app_redis_stats - histogram - Response time of Redis commands in milliseconds --- - app_pubsub_publish_total_count - counter - Number of total publish operations --- - app_pubsub_publish_success_count - counter - Number of successful publish operations --- - app_pubsub_subscribe_total_count - counter - Number of total subscribe operations --- - app_pubsub_subscribe_success_count - counter - Number of successful subscribe operations --- - app_http_retry_count - counter - Total number of retry events --- - app_http_circuit_breaker_state - gauge - Current state of the circuit breaker (0 for Closed, 1 for Open). Used for historical timeline visualization. --- - app_graphql_operations_total - counter - Total number of GraphQL operations received. Labels: `operation_name`, `type`. --- - app_graphql_error_total - counter - Total number of GraphQL operations that returned an error. Labels: `operation_name`, `type`. --- - app_graphql_request_duration - histogram - Response time of GraphQL requests in seconds. Labels: `operation_name`, `type`, `status`. 
--- - app_cron_job_total - counter - Total number of cron job executions. Label: `job`. --- - app_cron_job_success - counter - Number of successful cron job executions. Label: `job`. --- - app_cron_job_failures - counter - Number of failed cron job executions. Label: `job`. --- - app_cron_job_duration - histogram - Duration of cron job execution in seconds. Label: `job`. {% /table %} For example: When running the application locally, we can access the /metrics endpoint on port 2121 from: {% new-tab-link title="http://localhost:2121/metrics" href="http://localhost:2121/metrics" /%} GoFr also supports creating {% new-tab-link newtab=false title="custom metrics" href="/docs/advanced-guide/publishing-custom-metrics" /%}. ### Disabling the Metrics Server To disable the metrics server entirely, set the `METRICS_PORT` environment variable to `0`: ```dotenv METRICS_PORT=0 ``` ### Example Dashboard These metrics can be easily consumed by monitoring systems like {% new-tab-link title="Prometheus" href="https://prometheus.io/" /%} and visualized in dashboards using tools like {% new-tab-link title="Grafana" href="https://grafana.com/" /%}. You can find the dashboard source in the {% new-tab-link title="GoFr repository" href="https://github.com/gofr-dev/gofr/tree/main/examples/http-server/docker/provisioning/dashboards/gofr-dashboard" /%}. {% figure src="/metrics-dashboard.png" alt="Grafana Dashboard showing GoFr metrics including HTTP request rates, response times, etc." caption="Example monitoring dashboard using GoFr's built-in metrics" /%} ## Tracing {% new-tab-link title="Tracing" href="https://opentelemetry.io/docs/concepts/signals/#traces" /%} is a powerful tool for gaining insights into your application's behavior, identifying bottlenecks, and improving system performance. A trace is a tree of spans. It is a collective of observable signals showing the path of work through a system. A trace on its own is distinguishable by a `TraceID`. 
In complex distributed systems, understanding how requests flow through the system is crucial for troubleshooting performance issues and identifying bottlenecks. Traditional logging approaches often fall short, providing limited visibility into the intricate interactions between components. ### Automated Tracing in GoFr GoFr automatically exports traces for all requests and responses. It uses {% new-tab-link title="OpenTelemetry" href="https://opentelemetry.io/docs/concepts/what-is-opentelemetry/" /%}, a popular tracing framework, to automatically add traces to all requests and responses. **Automatic Correlation ID Propagation:** When a request enters your GoFr application, GoFr automatically generates a correlation ID (`X-Correlation-ID`) and adds it to the response headers. This correlation ID is then propagated to all downstream requests. This means that users can track a request as it travels through your distributed system by simply looking at the correlation ID in the request headers. **Pub/Sub Tracing:** Tracing automatically extends across pub/sub boundaries. Every `Publish` injects the active trace context into the outgoing message (Kafka headers, NATS headers, Google Pub/Sub / SQS attributes, etc.) and every `Subscribe` extracts it to start the consumer span as a child of the producer — so an `HTTP → publish → subscribe → publish → subscribe` flow shows up as one connected trace. See {% new-tab-link newtab=false title="Publisher Subscriber → Distributed Tracing" href="/docs/advanced-guide/using-publisher-subscriber#distributed-tracing" /%} for details. ### Configuration & Usage GoFr has support for the following trace exporters. Pick **one** — don't run all of them at once. #### 1. [OpenTelemetry Protocol](https://opentelemetry.io/docs/specs/otlp/) (recommended): OTLP over gRPC is the current OpenTelemetry standard and works with any OTLP-compatible backend (Jaeger 1.35+, Tempo, Honeycomb, the OpenTelemetry Collector, etc.).
Add OTLP configs to the `.env` file; your `.env` will be updated to: ```dotenv APP_NAME=test-service HTTP_PORT=8000 REDIS_HOST=localhost REDIS_PORT=2002 REDIS_PASSWORD=password DB_HOST=localhost DB_USER=root DB_PASSWORD=root123 DB_NAME=test_db DB_PORT=3306 # tracing configs TRACE_EXPORTER=otlp TRACER_URL=localhost:4317 TRACER_RATIO=1.0 LOG_LEVEL=DEBUG ``` > [!NOTE] > `TRACER_RATIO=1.0` samples 100% of traces — convenient for local development so a single `curl` produces a visible trace. In production, lower this (e.g. `0.05` for 5%) to keep export volume manageable. #### 2. [Jaeger](https://www.jaegertracing.io/): To see the traces, install the Jaeger image using the following Docker command: ```bash docker run -d --name jaeger \ -e COLLECTOR_OTLP_ENABLED=true \ -p 16686:16686 \ -p 14317:4317 \ -p 14318:4318 \ jaegertracing/all-in-one:1.41 ``` Add Jaeger tracer configs to the `.env` file; your `.env` will be updated to: ```dotenv # ... no change in other env variables # tracing configs TRACE_EXPORTER=jaeger TRACER_URL=localhost:14317 TRACER_RATIO=1.0 ``` Open {% new-tab-link title="Jaeger" href="http://localhost:16686/trace/" /%} and search by TraceID (correlationID) to see the trace. {% figure src="/jaeger-traces.png" alt="Jaeger traces" /%} #### 3. [Zipkin](https://zipkin.io/) (legacy): > [!NOTE] > `TRACE_EXPORTER=zipkin` is deprecated and will be removed in a future release. Zipkin v2.24+ supports OTLP natively — prefer the OTLP exporter above. The Zipkin exporter remains for users on older Zipkin servers. To see the traces, install the Zipkin image using the following Docker command: ```bash docker run --name gofr-zipkin -p 2005:9411 -d openzipkin/zipkin:latest ``` Add tracer configs to the `.env` file: ```dotenv # ...
no change in other env variables # tracing configs TRACE_EXPORTER=zipkin TRACER_URL=http://localhost:2005/api/v2/spans TRACER_RATIO=1.0 ``` Open {% new-tab-link title="Zipkin" href="http://localhost:2005/zipkin/" /%} and search by TraceID (correlationID) to see the trace. {% figure src="/quick-start-trace.png" alt="Zipkin traces" /%} #### 4. [GoFr Tracer](https://tracer.gofr.dev/): GoFr Tracer is GoFr's own custom trace exporter and collector. Users can search a trace by its TraceID (correlationID) in GoFr's own tracer service, available anywhere, anytime. Add GoFr Tracer configs to the `.env` file; your `.env` will be updated to: ```dotenv # ... no change in other env variables # tracing configs TRACE_EXPORTER=gofr TRACER_RATIO=0.1 ``` > [!NOTE] > `TRACER_RATIO` refers to the proportion of traces that are exported through sampling. It ranges between 0 and 1. By default, this ratio is set to 1, meaning all traces are exported. > > Open {% new-tab-link title="gofr-tracer" href="https://tracer.gofr.dev/" /%} and search by TraceID (correlationID) to see the trace. ### Custom Authentication Headers Many observability platforms require custom headers for authentication. GoFr supports this through the `TRACER_HEADERS` configuration, which accepts comma-separated `key=value` pairs following the OpenTelemetry standard format.
#### Usage Examples **Single Header:** ```dotenv # Honeycomb TRACER_HEADERS="X-Honeycomb-Team=your_api_key" ``` **Multiple Headers:** ```dotenv # Grafana Cloud with multiple headers TRACER_HEADERS="Authorization=Basic base64encodedcreds,X-Scope-OrgID=tenant-1" ``` ```dotenv # API key with special characters TRACER_HEADERS="X-Api-Key=secret123,Authorization=Bearer token" ``` #### Configuration Example Here's an example for sending traces to Grafana Cloud with authentication: ```dotenv APP_NAME=my-service # Grafana Cloud OTLP endpoint with authentication TRACE_EXPORTER=otlp TRACER_URL=otlp-gateway-prod-us-east-0.grafana.net:443 TRACER_HEADERS="Authorization=Basic dXNlcm5hbWU6cGFzc3dvcmQ=,X-Scope-OrgID=123456" TRACER_RATIO=1.0 ``` --- # Advanced Guide ## https://gofr.dev/docs/advanced-guide/authentication # Authentication Authentication is a crucial aspect of web applications, controlling access to resources based on user roles or permissions. It is the process of verifying a user's identity to grant access to protected resources. It ensures that only authenticated users can perform actions or access data within an application. GoFr offers a **Unified Authentication** model, meaning that once you enable an authentication method, it automatically applies to both your HTTP and gRPC services. ## Exempted Paths By default, the authentication middleware exempts every path under `/.well-known/*` from authentication (the middleware's `isWellKnown` check uses `strings.HasPrefix(path, "/.well-known")`). This includes: - `/.well-known/alive` — liveness probe - `/.well-known/health` — readiness/health probe with datasource status - `/.well-known/swagger` — Swagger UI (when an `openapi.json` is present in `static/`) If `/.well-known/health` exposes sensitive details about your datasources, scope its visibility at the network layer (private listener, mesh policy, or ingress rule) rather than expecting the auth middleware to gate it — the framework will not. ## 1. 
Basic Auth *Basic Authentication* is a simple authentication scheme where the user's credentials (username and password) are transmitted in the request header in a Base64-encoded format. Basic auth is the simplest way to authenticate your APIs. It's built on the {% new-tab-link title="HTTP protocol authentication scheme" href="https://datatracker.ietf.org/doc/html/rfc7617" /%}. It involves sending the prefix `Basic` followed by the Base64-encoded `username:password` within the standard `Authorization` header. ### Usage in GoFr GoFr offers two ways to implement basic authentication: **1. Predefined Credentials** Use `EnableBasicAuth(username, password)` to configure GoFr with pre-defined credentials. ```go func main() { app := gofr.New() app.EnableBasicAuth("admin", "secret_password") // Replace with your credentials app.GET("/protected-resource", func(c *gofr.Context) (any, error) { return "Success", nil }) app.Run() } ``` **2. Custom Validation Function** Use `EnableBasicAuthWithValidator(validationFunc)` to implement your own validation logic for credentials. The `validationFunc` takes the username and password as arguments and returns true if valid, false otherwise. ```go func validateUser(c *container.Container, username, password string) bool { // Implement your credential validation logic here return username == "john" && password == "doe123" } func main() { app := gofr.New() app.EnableBasicAuthWithValidator(validateUser) app.Run() } ``` ## 2. API Keys Auth *API Key Authentication* is an authentication scheme where a unique API key is included in the request header `X-Api-Key` for validation against a store of authorized keys. ### Usage in GoFr GoFr offers two ways to implement API Keys authentication. **1.
Framework Default Validation** - GoFr's default validation can be selected using **_EnableAPIKeyAuth(apiKeys ...string)_** ```go func main() { app := gofr.New() app.EnableAPIKeyAuth("9221e451-451f-4cd6-a23d-2b2d3adea9cf", "0d98ecfe-4677-48aa-b463-d43505766915") app.Run() } ``` **2. Custom Validation Function** - GoFr allows a custom validator function for validating APIKeys using **_EnableAPIKeyAuthWithValidator(validator)_** ```go func apiKeyValidator(c *container.Container, apiKey string) bool { validKeys := []string{"f0e1dffd-0ff0-4ac8-92a3-22d44a1464e4"} return slices.Contains(validKeys, apiKey) } func main() { app := gofr.New() app.EnableAPIKeyAuthWithValidator(apiKeyValidator) app.Run() } ``` ## 3. OAuth 2.0 {% new-tab-link title="OAuth" href="https://www.rfc-editor.org/rfc/rfc6749" /%} 2.0 is the industry-standard protocol for authorization. It involves sending the prefix `Bearer` trailed by the encoded token within the standard `Authorization` header. ### Usage in GoFr Enable OAuth 2.0 to authenticate requests. Use `EnableOAuth(jwks-endpoint, refresh_interval, options ...jwt.ParserOption)` to configure GoFr. ```go func main() { app := gofr.New() app.EnableOAuth("http://jwks-endpoint", 3600) app.Run() } ``` ### Available JWT Claim Validations - **Expiration (`exp`)**: Validated by default if present. Use `jwt.WithExpirationRequired()` to make it mandatory. - **Audience (`aud`)**: `jwt.WithAudience("https://api.example.com")` - **Issuer (`iss`)**: `jwt.WithIssuer("https://auth.example.com")` - **Subject (`sub`)**: `jwt.WithSubject("user@example.com")` ## Accessing Auth Info in Handlers Once authenticated, you can retrieve the authentication information from the context using the `GetAuthInfo()` method. This works identically for both HTTP and gRPC handlers. 
```go func MyHandler(ctx *gofr.Context) (any, error) { authInfo := ctx.GetAuthInfo() // For Basic Auth username := authInfo.GetUsername() // For API Key apiKey := authInfo.GetAPIKey() // For OAuth claims := authInfo.GetClaims() if claims != nil { // Access specific claims (typecasting is required for specific claim values) userID := claims["sub"].(string) } return "Success", nil } ``` ## Security Best Practices * **Timing Attacks**: GoFr's Basic Auth and API Key interceptors use `subtle.ConstantTimeCompare` to prevent timing attacks. * **TLS**: Always use TLS in production to encrypt the authentication credentials and tokens transmitted over the network. ## Related production guides - **Auth in Kubernetes**: [Manage JWT keys, OIDC, and auth secrets on K8s](/docs/guides/auth-in-kubernetes) — secret rotation and key distribution for production auth. --- ## https://gofr.dev/docs/advanced-guide/building-cli-applications # Building CLI Applications GoFr provides a simple way to build command-line applications using `gofr.NewCMD()`. This creates standalone CLI tools without starting an HTTP server. ## Configuration To configure logging for CLI applications, set the following environment variable: - `CMD_LOGS_FILE`: The file path where CLI logs will be written. If not set, logs are discarded. 
## Getting Started Create a basic CLI application with subcommands: ```go package main import ( "fmt" "gofr.dev/pkg/gofr" ) func main() { app := gofr.NewCMD() // Simple hello command app.SubCommand("hello", func(c *gofr.Context) (any, error) { return "Hello World!", nil }, gofr.AddDescription("Print hello message")) // Command with parameters app.SubCommand("greet", func(c *gofr.Context) (any, error) { name := c.Param("name") if name == "" { name = "World" } return fmt.Sprintf("Hello, %s!", name), nil }) app.Run() } ``` ## Key GoFr CLI Methods - **`gofr.NewCMD()`**: Initialize a CLI application - **`app.SubCommand(name, handler, options...)`**: Add a subcommand - **`gofr.AddDescription(desc)`**: Add help description - **`gofr.AddHelp(help)`**: Add detailed help text - **`ctx.Param(name)`**: Get command parameters - **`ctx.Out.Println()`**: Print to stdout - **`ctx.Logger`**: Access logging ## Running CLI Applications Build and run your CLI: ```bash go build -o mycli ./mycli hello ./mycli greet --name John ./mycli --help ``` ## Example Commands ```bash # Basic command ./mycli hello # Output: Hello World! # Command with parameter ./mycli greet --name Alice # Output: Hello, Alice! # Help ./mycli --help ``` For more details, see the [sample-cmd example](https://github.com/gofr-dev/gofr/tree/main/examples/sample-cmd). --- ## https://gofr.dev/docs/advanced-guide/circuit-breaker # Circuit Breaker in HTTP Communication {% answer %} GoFr's `AddHTTPService` registers an instrumented HTTP client and accepts a `service.CircuitBreakerConfig`. Configure it with a failure threshold and a polling interval. When the failure count crosses the threshold, the breaker opens and short-circuits subsequent requests for fast failure; the polling interval drives recovery checks against the upstream's health endpoint so traffic resumes automatically once the dependency is healthy again. 
{% /answer %} Calls to remote resources and services can fail due to temporary issues like slow network connections, timeouts, or service unavailability. While transient faults can be mitigated using the "Retry pattern", there are cases where continual retries are futile, such as during severe service failures. In such scenarios, it's crucial for applications to recognize when an operation is unlikely to succeed and handle the failure appropriately rather than persistently retrying. Indiscriminate use of HTTP retries can even lead to unintentional denial-of-service attacks within the software itself, as multiple clients may flood a failing service with retry attempts. To prevent this, a defense mechanism like the circuit breaker pattern is essential. Unlike the "Retry pattern" which aims to eventually succeed, the circuit breaker pattern focuses on preventing futile operations. While these patterns can be used together, it's vital for the retry logic to be aware of the circuit breaker's feedback and cease retries if the circuit breaker indicates a non-transient fault. GoFr provides this functionality out of the box; it can be enabled by passing circuit breaker configs as options to the `AddHTTPService()` method. ## How It Works The circuit breaker tracks consecutive failed requests for a downstream service. - **Threshold:** The number of consecutive failed requests after which the circuit breaker transitions to an open state. While open, all requests to that service fail immediately without making any actual outbound calls, effectively preventing request overflow to an already failing service. - **Interval:** Once the circuit is open, GoFr starts a background goroutine that periodically checks the health of the service by making requests to its aliveness endpoint (by default: `/.well-known/alive`) at the specified interval. When the service is deemed healthy again, the circuit breaker transitions directly from **Open** to **Closed**, allowing requests to resume.
> GoFr's circuit breaker implementation does not use a **Half-Open** state. Instead, it relies on periodic asynchronous health checks to determine service recovery. ## Failure Conditions The Circuit Breaker counts a request as "failed" if: 1. An error occurs during the HTTP request (e.g., network timeout, connection refused). 2. The response status code is **greater than 500** (e.g., 502, 503, 504). > **Note:** HTTP 500 Internal Server Error is **NOT** counted as a failure for the circuit breaker. This distinguishes between application bugs (500) and service availability issues (> 500). ## Health Check Requirement For the Circuit Breaker to recover from an **Open** state, the downstream service **must** expose a health check endpoint that returns a `200 OK` status code. - **Default Endpoint:** `/.well-known/alive` - **Custom Endpoint:** Can be configured using `service.HealthConfig`. > [!WARNING] > If the downstream service does not have a valid health check endpoint (returns 404 or other errors), the Circuit Breaker will **never recover** and will remain permanently Open. Ensure your services implement the health endpoint correctly. ## Interaction with Retry When using both Retry and Circuit Breaker patterns, the **order of wrapping** is critical for effective resilience: - **Recommended: Retry as the Outer Layer** In this configuration, the `Retry` layer wraps the `Circuit Breaker`. Every single retry attempt is tracked by the circuit breaker. If a request retries 5 times, the circuit breaker sees 5 failures. This allows the circuit to trip quickly during a "retry storm," protecting the downstream service from excessive load. - **Non-Recommended: Circuit Breaker as the Outer Layer** If the `Circuit Breaker` wraps the `Retry` layer, it only sees the **final result** of the entire retry loop. Even if a request retries 10 times internally, the circuit breaker only counts it as **1 failure**. 
This delays the circuit's reaction and can lead to hundreds of futile calls hitting a failing service before the breaker finally trips. > [!IMPORTANT] > Always ensure `Retry` is the outer layer by providing the `CircuitBreakerConfig` **before** the `RetryConfig` in the `AddHTTPService` options. > NOTE: Retries only occur when the target service responds with a status code > 500 (e.g., 502 Bad Gateway, 503 Service Unavailable). 500 Internal Server Error and client errors (4xx) are considered non-transient or bug-related and will not trigger retries. ## Usage ```go package main import ( "time" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/service" ) func main() { // Create a new application app := gofr.New() app.AddHTTPService("order", "https://order-func", &service.CircuitBreakerConfig{ // Number of consecutive failed requests after which circuit breaker will be enabled Threshold: 4, // Time interval at which circuit breaker will hit the health endpoint. Interval: 1 * time.Second, }, ) app.GET("/order", Get) // Run the application app.Run() } ``` The circuit breaker state changes to open when the number of consecutive failed requests exceeds the threshold. While in the open state, GoFr makes requests to the health endpoint (default: `/.well-known/alive`, or the custom endpoint if configured) at the interval provided in the config. GoFr publishes the following metric to track circuit breaker state: - `app_http_circuit_breaker_state`: Current state of the circuit breaker (0 for Closed, 1 for Open). This metric is used to visualize a historical timeline of circuit transitions on the dashboard.
> ##### Check out the example of an inter-service HTTP communication along with circuit-breaker in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/using-http-service/main.go) ## Related production guides - **Service Mesh Integration**: [Layering GoFr's circuit breaker with mesh-level retries and outlier detection](/docs/guides/service-mesh-integration) — choosing app vs. mesh policies. --- ## https://gofr.dev/docs/advanced-guide/custom-spans-in-tracing # Custom Spans In Tracing GoFr's built-in tracing provides valuable insights into an application's behavior. However, sometimes you might need even more granular details about specific operations within your application. This is where `custom spans` come in. ## How it helps By adding custom spans to your requests, you can: - **Gain granular insights:** Custom spans allow you to track specific operations or functions within your application, providing detailed performance data. - **Identify bottlenecks:** Analyzing custom spans helps pinpoint areas of your code that may be causing performance bottlenecks or inefficiencies. - **Improve debugging:** Custom spans enhance the ability to debug issues by providing visibility into the execution flow of an application. ## Usage To add a custom trace to a request, the GoFr context provides the `Trace()` method, which takes the name of the span as an argument and returns a `trace.Span`. ```go import "gofr.dev/pkg/gofr" func MyHandler(c *gofr.Context) (any, error) { span := c.Trace("my-custom-span") defer span.End() // Do some work here return nil, nil } ``` In this example, **my-custom-span** is the name of the custom span that is added to the request. The defer statement ensures that the span is ended even if an error occurs, so the trace is properly recorded.
> ##### Check out the example of creating a custom span in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/http-server/main.go#L58) ## Related production guides - **Production Tracing**: [Configure OpenTelemetry exporters and samplers](/docs/guides/production-tracing) — wire your custom spans into a real OTLP collector. - **Distributed Tracing**: [End-to-end traces across services](/docs/guides/distributed-tracing) — propagate trace context for inter-service spans. --- ## https://gofr.dev/docs/advanced-guide/dealing-with-sql # Dealing with SQL GoFr simplifies the process of connecting to SQL databases: add the respective configs in `.env`, and GoFr connects to different SQL dialects (MySQL, PostgreSQL, SQLite) without the complexity of configuring connections. With GoFr, connecting to different SQL databases is as straightforward as setting the `DB_DIALECT` environment variable to the respective dialect. ## Usage for PostgreSQL and MySQL To connect with PostgreSQL, set `DB_DIALECT` to `postgres`. Similarly, to connect with MySQL, simply set `DB_DIALECT` to `mysql`. ```dotenv DB_HOST=localhost DB_USER=root DB_PASSWORD=root123 DB_NAME=test_db DB_PORT=3306 DB_DIALECT=postgres ``` ## Usage for SQLite To connect with SQLite, set `DB_DIALECT` to `sqlite` and `DB_NAME` to the name of your DB file. If the DB file already exists, it will be used; otherwise a new one will be created. ```dotenv DB_NAME=test.db DB_DIALECT=sqlite ``` ## Setting Max Open and Idle Connections To set max open and idle connections for MySQL, PostgreSQL, or SQLite, add the following configs in the `.env` file.
```dotenv DB_MAX_IDLE_CONNECTION=5 # Default 2 DB_MAX_OPEN_CONNECTION=5 # Default unlimited ``` > ##### Check out the example on how to add configuration for SQL in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/http-server/configs/.env) ## Related production guides - **Connection Pooling**: [Tune `DB_MAX_OPEN_CONNECTION` and friends for production load](/docs/guides/connection-pooling) — sizing pools to match traffic and DB capacity. --- ## https://gofr.dev/docs/advanced-guide/debugging # Using `pprof` in GoFr Applications In GoFr applications, `pprof` profiling is automatically enabled. The profiling endpoints are served on the `METRICS_PORT`, which defaults to `2121` if not specified. This guide explains how to enable and use `pprof` in GoFr applications. --- ## Enabling `pprof` in GoFr ### Prerequisites Ensure the `METRICS_PORT` is set (default is `2121`): ```bash METRICS_PORT=2121 ``` GoFr automatically registers the following `pprof` routes: - `/debug/pprof/cmdline` - `/debug/pprof/profile` - `/debug/pprof/symbol` - `/debug/pprof/trace` - `/debug/pprof/` (index) --- ## Accessing `pprof` Endpoints Once `pprof` is enabled, you can access the profiling endpoints at `http://localhost:<METRICS_PORT>/debug/pprof/`. For example, if `METRICS_PORT` is `2121`, the endpoints will be available at: - `http://localhost:2121/debug/pprof/` ### Available Endpoints 1. **`/debug/pprof/cmdline`**: - Returns the command-line arguments of the running application. 2. **`/debug/pprof/profile`**: - Generates a CPU profile for the application. 3. **`/debug/pprof/symbol`**: - Resolves program counters into function names. 4. **`/debug/pprof/trace`**: - Captures an execution trace of the application. 5. **`/debug/pprof/` (index)**: - Provides an index page with links to all available profiling endpoints, including memory, goroutine, and blocking profiles. --- ## Collecting Profiling Data ### 1.
**CPU Profiling** To collect a CPU profile: ```bash curl -o cpu.pprof http://localhost:2121/debug/pprof/profile ``` ### 2. **Memory Profiling** To collect a memory profile: ```bash curl -o mem.pprof http://localhost:2121/debug/pprof/heap ``` ### 3. **Goroutine Profiling** To collect information about running goroutines: ```bash curl -o goroutine.pprof http://localhost:2121/debug/pprof/goroutine ``` ### 4. **Execution Trace** To collect an execution trace: ```bash curl -o trace.out http://localhost:2121/debug/pprof/trace ``` --- ## Analyzing Profiling Data ### 1. Using go tool pprof To analyze CPU, memory, or goroutine profiles: ```bash go tool pprof <profile_file> ``` #### **`top`** Shows the functions consuming the most resources (e.g., CPU or memory). ```bash go tool pprof cpu.pprof (pprof) top ``` #### **`list`** Displays the source code of a specific function, along with resource usage. ```bash (pprof) list <function_name> ``` Example: ```bash (pprof) list main.myFunction ``` #### **`web`** Generates a visual representation of the profile in your browser. This requires Graphviz to be installed. ```bash (pprof) web ``` ### 2. Using go tool trace To analyze execution traces: ```bash go tool trace trace.out ``` --- ## Example Workflow 1. **Set Environment Variables**: ```bash METRICS_PORT=2121 ``` 2. **Run Your GoFr Application**: ```bash go run main.go ``` 3. **Collect Profiling Data**: - Collect a CPU profile: ```bash curl -o cpu.pprof http://localhost:2121/debug/pprof/profile ``` - Collect a memory profile: ```bash curl -o mem.pprof http://localhost:2121/debug/pprof/heap ``` 4.
**Analyze the Data**: - Analyze the CPU profile: ```bash go tool pprof cpu.pprof (pprof) top (pprof) list main.myFunction (pprof) web ``` - Analyze the memory profile: ```bash go tool pprof mem.pprof (pprof) top (pprof) list main.myFunction (pprof) web ``` --- ## References - [Go `pprof` Documentation](https://pkg.go.dev/net/http/pprof) - [Profiling Go Programs](https://blog.golang.org/profiling-go-programs) - [Go Execution Tracer](https://golang.org/doc/diagnostics.html#tracing) --- ## https://gofr.dev/docs/advanced-guide/gofr-errors # Error Handling GoFr provides a structured error handling approach to simplify error management in your applications. The errors package in GoFr provides functionality for handling errors in GoFr applications. It includes predefined HTTP and database errors, as well as the ability to create custom errors with additional context. ## Pre-defined HTTP Errors GoFr's `http` package offers several predefined error types to represent common HTTP error scenarios. These errors automatically handle HTTP status code selection. 
These include: {% table %} - Error Type - Description - Status Code --- - `ErrorInvalidParam` - Represents an error due to an invalid parameter - 400 (Bad Request) --- - `ErrorMissingParam` - Represents an error due to a missing parameter - 400 (Bad Request) --- - `ErrorEntityNotFound` - Represents an error due to an entity not being found - 404 (Not Found) --- - `ErrorEntityAlreadyExist` - Represents an error due to creation of a duplicate entity - 409 (Conflict) --- - `ErrorInvalidRoute` - Represents an error for an invalid route - 404 (Not Found) --- - `ErrorRequestTimeout` - Represents an error for a request which timed out - 408 (Request Timeout) --- - `ErrorPanicRecovery` - Represents an error for a request which panicked - 500 (Internal Server Error) {% /table %} #### Usage: To use the predefined HTTP errors, users need to import the GoFr http package and can simply call them: ```go import "gofr.dev/pkg/gofr/http" err := http.ErrorMissingParam{Params: []string{"id"}} ``` ## Database Errors Database errors in GoFr, represented in the `datasource` package, encapsulate errors related to database operations such as database connection, query failure, availability, etc. The `ErrorDB` struct can be used to wrap an underlying `error` as well as attach a custom message. **Status Code: 500 (Internal Server Error)** #### Usage: ```go import "gofr.dev/pkg/gofr/datasource" // Creating a custom error wrapped in underlying error for database operations dbErr := datasource.ErrorDB{Err: err, Message: "error from sql db"} // Adding stack trace to the error dbErr = dbErr.WithStack() // Creating a custom error only with error message and no underlying error. dbErr2 := datasource.ErrorDB{Message: "database connection timed out!"} ``` ## Custom Errors GoFr's error structs implement an interface with `Error() string` and `StatusCode() int` methods; users can override the status code by implementing these for their custom error.
Users can optionally define a log level for their error by implementing the `LogLevel() logging.Level` method. #### Usage: ```go type customError struct { error string } func (c customError) Error() string { return fmt.Sprintf("custom error: %s", c.error) } func (c customError) StatusCode() int { return http.StatusMethodNotAllowed } func (c customError) LogLevel() logging.Level { return logging.WARN } ``` ## Extended Error Responses For [RFC 9457](https://www.rfc-editor.org/rfc/rfc9457.html) style error responses with additional fields, implement the ResponseMarshaller interface: ```go type ResponseMarshaller interface { Response() map[string]any } ``` #### Usage: ```go type ValidationError struct { Field string Message string Code int } func (e ValidationError) Error() string { return e.Message } func (e ValidationError) StatusCode() int { return e.Code } func (e ValidationError) Response() map[string]any { return map[string]any{ "field": e.Field, "type": "validation_error", "details": "Invalid input format", } } ``` > [!NOTE] > The `message` field is automatically populated from the `Error()` method. Custom fields with the name "message" in the `Response()` map should not be used, as they will be ignored in favor of the `Error()` value. --- ## https://gofr.dev/docs/advanced-guide/graphql # GraphQL in GoFr GoFr provides a **Schema-First** approach to building GraphQL APIs. This means you define your API contract in a standard GraphQL schema file, and GoFr handles the execution, validation, and observability. ## Required Setup To enable GraphQL, you MUST provide a schema file at the following location: `./configs/schema.graphqls` > **Note:** GoFr uses a single schema file. All Query and Mutation types must be defined in this one file. > You can register multiple resolvers (one per field) using `GraphQLQuery` and `GraphQLMutation`, but > they all resolve fields within this single schema.
If this file is missing or invalid, GoFr will log a fatal error and the application will fail to start. This fail-fast behavior ensures schema issues are caught at deployment rather than runtime. ## Core Concepts ### 1. [Query](https://graphql.org/learn/queries/) Queries are used to fetch data. In GoFr, a Query resolver is a function that takes `*gofr.Context` and returns a data object (or `any`) and an error. ### 2. [Mutation](https://graphql.org/learn/queries/#mutations) Mutations are used to modify data. They follow the same signature as Queries but are intended for side effects. ## The Unified Schema GoFr aggregates every `GraphQLQuery` and `GraphQLMutation` you register and validates them against your `./configs/schema.graphqls`. The API is served at `/graphql`. * **Single Endpoint**: All operations go through `POST /graphql`. * **Playground**: Interactive documentation and testing at `/.well-known/graphql/ui`. --- ## Getting Started ### 1. Define your Schema Create `configs/schema.graphqls`: ```graphql type User { id: Int name: String } type Query { user(id: Int): User } ``` ### 2. Register Resolvers In GoFr, resolvers strictly take `*gofr.Context`. You use `c.Bind()` to extract arguments. ```go type User struct { ID int `json:"id"` Name string `json:"name"` } func main() { app := gofr.New() app.GraphQLQuery("user", func(c *gofr.Context) (any, error) { var args struct { ID int `json:"id"` } if err := c.Bind(&args); err != nil { return nil, err } // Return a struct - GoFr validates this against the schema at runtime return User{ ID: args.ID, Name: "Antigravity", }, nil }) app.Run() } ``` --- ## Schema-First Features ### 1. Returns `any` Unlike standard HTTP handlers which allow `any` but lose structure, GraphQL handlers in GoFr return `any` while **maintaining the contract** defined in your `.graphqls` file. - GoFr leverages the underlying `graphql-go` engine to validate the returned object against your defined schema. 
- If the object does not match the schema types, GoFr returns an error in the `errors` array with partial data where applicable. ### 2. HTTP Status Codes GoFr follows the standard GraphQL-over-HTTP convention by returning `200 OK` for all successfully processed requests, including those with resolver errors. This ensures that the response body is the source of truth for execution results. | Status Code | Condition | |---|---| | `200 OK` | The request was processed (regardless of whether it returned data or errors). | | `400 Bad Request` | The request body is not valid JSON. | **Error response body**: > **Note:** The GraphQL error format follows the [GraphQL specification](https://spec.graphql.org/October2021/#sec-Errors), > which uses an `errors` array. This differs from GoFr's REST API format which uses a singular `error` object. > This is intentional — each protocol follows its own standard. ```json { "data": null, "errors": [ { "message": "your error message here", "locations": [{ "line": 1, "column": 3 }], "path": ["fieldName"] } ] } ``` ### 3. Argument Binding Instead of declarative arguments in the function signature, you use the standard `c.Bind()` method. GoFr automatically maps the GraphQL `args` map to your struct using JSON tags. ### 4. Supported Types GoFr supports all standard GraphQL types including scalars, objects, enums, and input types. For a complete reference on the GraphQL type system, see the [official GraphQL documentation](https://graphql.org/learn/schema/). --- ## Testing Your GraphQL API ### 1. Interactive Exploration GoFr automatically hosts a **GraphQL Playground** at `/.well-known/graphql/ui` when GraphQL resolvers are registered. ### 2. Standard POST Requests The `/graphql` endpoint accepts a JSON body with the following fields: | Field | Type | Description | |---|---|---| | `query` | `string` | **Required.** The GraphQL query or mutation string. | | `operationName` | `string` | Optional. 
The name of the operation to execute (used for metrics tagging). | | `variables` | `object` | Optional. A map of variable values for the query. | **Simple query:** ```bash curl -X POST \ -H "Content-Type: application/json" \ -d '{"query": "{ user(id: 1) { name } }"}' \ http://localhost:8000/graphql ``` **Named operation with variables:** ```bash curl -X POST \ -H "Content-Type: application/json" \ -d '{"query": "query GetUser($id: Int) { user(id: $id) { name } }", "operationName": "GetUser", "variables": {"id": 1}}' \ http://localhost:8000/graphql ``` --- ## Observability GoFr provides production-grade observability for GraphQL out of the box. ### 1. Tracing GoFr automatically instruments your GraphQL API with OpenTelemetry traces: - **Root Span**: Every request generates a `graphql-request` span. - **Resolver Spans**: Each individual resolver call generates a nested span (e.g., `graphql-resolver-user`), allowing you to see the exact time spent in each field's business logic. - **Attributes**: The `graphql.operation_name` and `graphql.operation_type` (query/mutation) are automatically added to the spans. ### 2. Metrics GoFr exports several GraphQL-specific metrics, all tagged by `operation_name`, `type` (query/mutation), and `status` (success/error): - **`app_graphql_operations_total`**: Total number of GraphQL operations received. - **`app_graphql_error_total`**: Total operations that resulted in an error (resolver error or validation failure). - **`app_graphql_request_duration`**: Histogram of the entire request lifecycle in seconds. > **Note:** The `operation_name` tag is sourced from the `operationName` field in the POST body. For anonymous operations, it defaults to `"unknown"`. GraphQL requests are only recorded by the GraphQL-specific metrics above — they are excluded from `app_http_response` to avoid double-counting. --- ## Monitoring and Health Checks ### 1. 
Health Checks Even when building a GraphQL-first application, GoFr's standard **RESTful health check endpoints** remain the primary way to monitor service availability. These are automatically registered and publicly accessible: - **Aliveness**: `/.well-known/alive` (Returns `200 OK` if the server is running) - **Health**: `/.well-known/health` (Returns detailed dependency status) GoFr does **not** inject an automatic `health` query into your GraphQL schema. This avoids redundancy and keeps your GraphQL contract focused on business logic. ### 2. Status Metric Label While traditional HTTP metrics (`app_http_response`) use numerical status codes (e.g., `200`, `500`) for the `status` label, GraphQL metrics (`app_graphql_*`) use a simplified `success` or `error` value. - **`success`**: The request was processed and returned no errors in the `errors` array. - **`error`**: The request was processed but one or more resolvers failed (returning a `200 OK` with an `errors` array), or the request itself was invalid (e.g., `400 Bad Request`). This distinction is important because GraphQL often returns `200 OK` even when business logic fails. The `success`/`error` label provides immediate visibility into the health of your resolvers. --- ## Design and Limitations GoFr's GraphQL implementation is designed for simplicity and strict adherence to standards while maintaining the framework's "sane defaults" philosophy. ### 1. Why `GraphQLQuery` / `GraphQLMutation` instead of `app.POST`? GoFr provides dedicated `GraphQLQuery` and `GraphQLMutation` methods rather than reusing `app.POST("/graphql", ...)` because the framework handles schema validation, resolver dispatch, per-field tracing, and automatic metrics internally. A raw POST handler would require you to implement all of this manually. ### 2. Why POST-only? Per the [GraphQL-over-HTTP specification](https://github.com/graphql/graphql-over-http), all GraphQL operations (including Queries) should be performed via `POST`. 
- **Security**: Preventing Queries over `GET` avoids accidentally exposing sensitive parameters in server logs or browser history. - **Consistency**: All operations use the same interaction model, simplifying middleware and observability. ### 3. Why only Query and Mutation? Currently, GoFr supports the two most common operation types: - **Query**: For read-only data fetching. - **Mutation**: For operations that cause side effects. **Subscriptions** (real-time updates) are currently not supported as they require a persistent stateful connection (like WebSockets), which deviates from the stateless, request-response model of GoFr's standard HTTP handlers. ### 4. Single Schema File GoFr enforces a single `./configs/schema.graphqls` file to ensure a "Single Source of Truth" for your API contract. While you can register many resolvers, they must all belong to this single unified schema. This prevents fragmentation and makes the API easier to document and maintain. --- ## Best Practices 1. **Keep Schema and Logic in sync**: Since the schema is defined in a separate file, ensure field names in your Go maps/structs match the field names in `schema.graphqls`. 2. **Use c.Bind()**: Always use `c.Bind()` for accessing arguments to benefit from GoFr's internal mapping and validation. 3. **Error Handling**: Return errors from your handlers. GoFr will include them in the `errors` array of the GraphQL response while still returning `200 OK`. 4. **Name your operations**: Use `operationName` in your requests so that metrics are tagged meaningfully (e.g., `GetUser` instead of `unknown`). --- ## https://gofr.dev/docs/advanced-guide/grpc-streaming # gRPC Streaming with GoFr GoFr provides comprehensive support for gRPC streaming, enabling efficient real-time communication between services. Streaming is particularly useful for scenarios where you need to send or receive multiple messages over a single connection, such as chat applications, real-time data feeds, or large file transfers. 
GoFr supports three types of gRPC streaming: - **Server-side streaming**: The server sends multiple responses to a single client request - **Client-side streaming**: The client sends multiple requests and receives a single response - **Bidirectional streaming**: Both client and server can send multiple messages independently All streaming methods in GoFr include built-in tracing, metrics, and logging support, ensuring seamless observability for your streaming operations. ## Prerequisites Before implementing gRPC streaming, ensure you have: 1. **Protocol Buffer Compiler (`protoc`)** installed (version 3+) 2. **Go gRPC plugins** installed: ```bash go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28 go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2 export PATH="$PATH:$(go env GOPATH)/bin" ``` 3. **gofr-cli** installed: ```bash go install gofr.dev/cli/gofr@latest ``` For detailed setup instructions, refer to the [gRPC with GoFr documentation](https://gofr.dev/docs/advanced-guide/grpc). ## Defining Streaming RPCs in Protocol Buffers To use streaming in your gRPC service, define your RPC methods with the `stream` keyword in your `.proto` file: ```protobuf syntax = "proto3"; option go_package = "path/to/your/proto/file"; message Request { string message = 1; } message Response { string message = 1; } service ChatService { // Server-side streaming: client sends one request, server sends multiple responses rpc ServerStream(Request) returns (stream Response); // Client-side streaming: client sends multiple requests, server sends one response rpc ClientStream(stream Request) returns (Response); // Bidirectional streaming: both client and server can send multiple messages rpc BiDiStream(stream Request) returns (stream Response); } ``` ## Generating gRPC Streaming Server Code GoFr CLI automatically generates streaming-aware server templates. 
Use the `gofr wrap grpc server` command: ```bash gofr wrap grpc server -proto=./path/to/your/proto/file ``` This command generates: - `_server.go`: Template file with streaming method signatures - `_gofr.go`: Generated wrapper with streaming instrumentation - `request_gofr.go`: Request wrapper for context binding - `health_gofr.go`: Health check server integration ### Server-Side Streaming Implementation Server-side streaming allows the server to send multiple responses to a single client request. This is useful for scenarios like real-time notifications or progressive data delivery. **Example Implementation:** ```go func (s *ChatServiceGoFrServer) ServerStream(ctx *gofr.Context, stream ChatService_ServerStreamServer) error { // Bind the initial request req := Request{} if err := ctx.Bind(&req); err != nil { return status.Errorf(codes.InvalidArgument, "invalid request: %v", err) } // Send multiple responses for i := 0; i < 5; i++ { // Check if context is canceled select { case <-stream.Context().Done(): return status.Error(codes.Canceled, "client disconnected") default: } resp := &Response{ Message: fmt.Sprintf("Server stream %d: %s", i, req.Message), } if err := stream.Send(resp); err != nil { return status.Errorf(codes.Internal, "error sending stream: %v", err) } time.Sleep(1 * time.Second) // Simulate processing delay } return nil } ``` **Key Points:** - Use `ctx.Bind()` to extract the initial request - Return appropriate gRPC status codes for binding errors - Check for context cancellation before each send operation - Call `stream.Send()` to send each response message - Return `nil` when streaming is complete, or an error if something goes wrong ### Client-Side Streaming Implementation Client-side streaming allows the client to send multiple requests before receiving a single response. This is useful for batch processing or aggregating data from the client. 
**Example Implementation:** ```go func (s *ChatServiceGoFrServer) ClientStream(ctx *gofr.Context, stream ChatService_ClientStreamServer) error { var messageCount int var finalMessage strings.Builder // Receive multiple messages from client for { // Check if context is canceled before receiving select { case <-stream.Context().Done(): return status.Error(codes.Canceled, "client disconnected") default: } req, err := stream.Recv() if err == io.EOF { // Client has finished sending, send final response return stream.SendAndClose(&Response{ Message: fmt.Sprintf("Received %d messages. Final: %s", messageCount, finalMessage.String()), }) } if err != nil { return status.Errorf(codes.Internal, "error receiving stream: %v", err) } // Process each message messageCount++ finalMessage.WriteString(req.Message + " ") } } ``` **Key Points:** - Check for context cancellation before each receive operation - Use `stream.Recv()` in a loop to receive messages - Check for `io.EOF` to detect when the client has finished sending - Return appropriate gRPC status codes for receive errors - Call `stream.SendAndClose()` to send the final response and close the stream - Process each message as it arrives ### Bidirectional Streaming Implementation Bidirectional streaming allows both client and server to send messages independently. This is useful for real-time chat applications or interactive protocols. 
**Example Implementation:** ```go func (s *ChatServiceGoFrServer) BiDiStream(ctx *gofr.Context, stream ChatService_BiDiStreamServer) error { errChan := make(chan error) // Handle incoming messages in a goroutine go func() { for { // Check if context is canceled select { case <-stream.Context().Done(): errChan <- status.Error(codes.Canceled, "client disconnected") return default: } req, err := stream.Recv() if err == io.EOF { break } if err != nil { errChan <- status.Errorf(codes.Internal, "error receiving stream: %v", err) return } // Process request and send response resp := &Response{Message: "Echo: " + req.Message} if err := stream.Send(resp); err != nil { errChan <- status.Errorf(codes.Internal, "error sending stream: %v", err) return } } errChan <- nil }() // Wait for completion or cancellation select { case err := <-errChan: return err case <-stream.Context().Done(): return status.Error(codes.Canceled, "client disconnected") } } ``` **Key Points:** - Use goroutines to handle concurrent send/receive operations - Check for context cancellation in the goroutine before receiving - Use `stream.Recv()` to receive messages - Use `stream.Send()` to send responses - Return appropriate gRPC status codes for errors - Monitor `stream.Context().Done()` to handle client disconnections - Use channels to coordinate between goroutines ## Generating gRPC Streaming Client Code Generate the client code using: ```bash gofr wrap grpc client -proto=./path/to/your/proto/file ``` This generates `_client.go` with streaming client interfaces. 
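All of the client usage patterns that follow share one control-flow idea: call `Recv()` in a loop and treat `io.EOF` as normal stream termination rather than a failure. Stripped of gRPC entirely, that loop can be sketched with a stand-in stream type (the `fakeStream` type and `drain` helper below are illustrative, not generated code):

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// fakeStream stands in for a generated gRPC client stream: Recv returns
// messages until the stream is exhausted, then io.EOF.
type fakeStream struct {
	msgs []string
}

func (s *fakeStream) Recv() (string, error) {
	if len(s.msgs) == 0 {
		return "", io.EOF
	}
	msg := s.msgs[0]
	s.msgs = s.msgs[1:]
	return msg, nil
}

// drain collects every message, treating io.EOF as normal termination
// and anything else as a real transport error.
func drain(s *fakeStream) ([]string, error) {
	var out []string
	for {
		msg, err := s.Recv()
		if err != nil {
			if errors.Is(err, io.EOF) {
				return out, nil // stream completed normally
			}
			return out, err
		}
		out = append(out, msg)
	}
}

func main() {
	got, err := drain(&fakeStream{msgs: []string{"a", "b", "c"}})
	fmt.Println(got, err) // prints: [a b c] <nil>
}
```

The real generated streams return protobuf message pointers instead of strings, but the `io.EOF` special-casing is identical: forgetting it turns every successful stream completion into a spurious error.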
### Server-Side Streaming Client Usage **Example Implementation:** ```go func (c *ChatHandler) ServerStreamHandler(ctx *gofr.Context) (any, error) { // Initiate server stream stream, err := c.chatClient.ServerStream(ctx, &client.Request{ Message: "stream request", }) if err != nil { return nil, fmt.Errorf("failed to initiate server stream: %v", err) } var responses []Response // Receive all streamed responses for { res, err := stream.Recv() if err != nil { if errors.Is(err, io.EOF) { break // Stream completed } return nil, fmt.Errorf("stream receive error: %v", err) } responses = append(responses, res) ctx.Logger.Infof("Received: %s", res.Message) } return responses, nil } ``` ### Client-Side Streaming Client Usage **Example Implementation:** ```go func (c *ChatHandler) ClientStreamHandler(ctx *gofr.Context) (any, error) { // Initiate client stream stream, err := c.chatClient.ClientStream(ctx) if err != nil { return nil, fmt.Errorf("failed to initiate client stream: %v", err) } // Get messages from request body var requests []*client.Request if err := ctx.Bind(&requests); err != nil { return nil, fmt.Errorf("failed to bind requests: %v", err) } // Send multiple messages for _, req := range requests { if err := stream.Send(req); err != nil { return nil, fmt.Errorf("failed to send request: %v", err) } } // Close stream and receive final response response, err := stream.CloseAndRecv() if err != nil { return nil, fmt.Errorf("failed to receive final response: %v", err) } return response, nil } ``` ### Bidirectional Streaming Client Usage **Example Implementation:** ```go func (c *ChatHandler) BiDiStreamHandler(ctx *gofr.Context) (any, error) { // Initiate bidirectional stream stream, err := c.chatClient.BiDiStream(ctx) if err != nil { return nil, fmt.Errorf("failed to initiate bidirectional stream: %v", err) } respChan := make(chan Response) errChan := make(chan error) // Receive responses in a goroutine go func() { for { res, err := stream.Recv() if err != nil { if 
errors.Is(err, io.EOF) { errChan <- nil } else { errChan <- err } return } respChan <- res } }() // Send messages messages := []string{"message 1", "message 2", "message 3"} for _, msg := range messages { if err := stream.Send(&client.Request{Message: msg}); err != nil { return nil, fmt.Errorf("failed to send message: %v", err) } } // Close send side if err := stream.CloseSend(); err != nil { return nil, fmt.Errorf("failed to close send: %v", err) } // Collect responses var responses []Response for { select { case err := <-errChan: return responses, err case resp := <-respChan: responses = append(responses, resp) case <-time.After(5 * time.Second): return nil, errors.New("timeout waiting for responses") } } } ``` ## Registering Streaming Services Register your streaming service in `main.go` just like unary services: ```go package main import ( "gofr.dev/examples/grpc/grpc-streaming-server/server" "gofr.dev/pkg/gofr" ) func main() { app := gofr.New() // Register streaming service server.RegisterChatServiceServerWithGofr(app, server.NewChatServiceGoFrServer()) app.Run() } ``` ## Built-in Observability GoFr automatically provides observability for all streaming operations: ### Metrics The following metrics are automatically registered: - **app_gRPC-Stream_stats**: Histogram tracking stream operation duration (Send, Recv, SendAndClose, CloseSend) - **app_gRPC-Client-Stream_stats**: Histogram for client-side streaming operations ### Tracing Each streaming operation (Send, Recv, SendAndClose, CloseSend) automatically creates spans for distributed tracing, allowing you to track the flow of messages through your system. ### Logging Streaming operations are automatically logged with: - Operation type (Send, Recv, etc.) - Method name - Duration - Error status (if any) ## Error Handling ### Common Streaming Errors 1. 
**`io.EOF`**: Indicates the stream has ended normally - In client-side streaming: Server should call `SendAndClose()` - In server-side/bidirectional streaming: Client has finished sending 2. **Context Cancellation**: Stream was canceled or timed out - Check `stream.Context().Done()` for cancellation - Return appropriate gRPC status codes 3. **Network Errors**: Connection issues during streaming - Handle gracefully and return appropriate error status **Example Error Handling:** ```go func (s *ChatServiceGoFrServer) ServerStream(ctx *gofr.Context, stream ChatService_ServerStreamServer) error { req := Request{} if err := ctx.Bind(&req); err != nil { return status.Errorf(codes.InvalidArgument, "invalid request: %v", err) } for i := 0; i < 5; i++ { // Check if context is canceled select { case <-stream.Context().Done(): return status.Error(codes.Canceled, "client disconnected") default: } resp := &Response{Message: fmt.Sprintf("Message %d", i)} if err := stream.Send(resp); err != nil { return status.Errorf(codes.Internal, "error sending stream: %v", err) } } return nil } ``` ## Adding Custom Stream interceptors For streaming RPCs (client-stream, server-stream, or bidirectional), GoFr allows you to add stream interceptors using `AddGRPCServerStreamInterceptors`. These are useful for handling logic that needs to span the entire lifetime of a stream. ```go func main() { app := gofr.New() app.AddGRPCServerStreamInterceptors(streamAuthInterceptor) // ... 
register your service app.Run() } func streamAuthInterceptor(srv any, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error { // Example: Validate metadata for the entire stream md, ok := metadata.FromIncomingContext(ss.Context()) if !ok || !isValidToken(md["auth-token"]) { return status.Errorf(codes.Unauthenticated, "invalid stream token") } // If valid, continue processing the stream return handler(srv, ss) } ``` For more details on adding additional interceptors and server options, refer to the [official gRPC Go package](https://pkg.go.dev/google.golang.org/grpc#ServerOption). ## Best Practices 1. **Always handle `io.EOF`**: This is the normal way streams end 2. **Monitor context cancellation**: Use `stream.Context().Done()` to detect client disconnections 3. **Use goroutines for bidirectional streams**: Allows concurrent send/receive operations 4. **Close streams properly**: Call `CloseSend()` when done sending in bidirectional streams 5. **Handle errors gracefully**: Return appropriate gRPC status codes 6. **Use timeouts**: Set reasonable timeouts for stream operations 7. **Log important events**: Use `ctx.Logger` to log stream lifecycle events ## Examples Complete working examples are available in the GoFr repository: - **Server Example**: `gofr/examples/grpc/grpc-streaming-server` - **Client Example**: `gofr/examples/grpc/grpc-streaming-client` These examples demonstrate all three types of streaming with detailed error handling and logging. 
## Further Reading - [gRPC with GoFr](https://gofr.dev/docs/advanced-guide/grpc) - General gRPC documentation - [gRPC Official Documentation](https://grpc.io/docs/what-is-grpc/introduction/) - Learn more about gRPC streaming concepts - [GoFr Examples](https://github.com/gofr-dev/gofr/tree/main/examples/grpc) - More gRPC examples --- ## https://gofr.dev/docs/advanced-guide/grpc # gRPC with Gofr We have already seen how GoFr can help ease the development of HTTP servers, but there are cases where performance is primarily required sacrificing flexibility. In these types of scenarios gRPC protocol comes into picture. {% new-tab-link title="gRPC" href="https://grpc.io/docs/what-is-grpc/introduction/" /%} is an open-source RPC(Remote Procedure Call) framework initially developed by Google. GoFr streamlines the creation of gRPC servers and clients with unified GoFr's context support. It provides built-in tracing, metrics, and logging to ensure seamless performance monitoring for both gRPC servers and inter-service gRPC communication. With GoFr's context, you can seamlessly define custom metrics and traces across gRPC handlers, ensuring consistent observability and streamlined debugging throughout your system. Additionally, GoFr provides a built-in health check for all your services and supports inter-service health checks, allowing gRPC services to monitor each other effortlessly. ## Prerequisites **1. Protocol Buffer Compiler (`protoc`) Installation:** - **Linux (using `apt` or `apt-get`):** ```bash sudo apt install -y protobuf-compiler protoc --version # Ensure compiler version is 3+ ``` - **macOS (using Homebrew):** ```bash brew install protobuf protoc --version # Ensure compiler version is 3+ ``` **2. Go Plugins for Protocol Compiler:** a. Install protocol compiler plugins for Go: ```bash go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28 go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2 ``` b. 
Update `PATH` for `protoc` to locate the plugins: ```bash export PATH="$PATH:$(go env GOPATH)/bin" ``` ## Creating Protocol Buffers For a detailed guide, refer to the {% new-tab-link title="Tutorial" href="https://grpc.io/docs/languages/go/basics/" /%} in the official gRPC docs. **1. Define Your Service and RPC Methods:** Create a `.proto` file (e.g., `customer.proto`) to define your service and the RPC methods it provides (the service, method, and message names below are illustrative): ```protobuf // Indicates the protocol buffer version that is being used syntax = "proto3"; // Indicates the go package where the generated file will be produced option go_package = "path/to/your/proto/file"; service CustomerService { rpc GetCustomer (CustomerRequest) returns (CustomerResponse) {} } ``` **2. Specify Request and Response Types:** Users must define the type of message being exchanged between server and client, for protocol buffers to serialize them when making a remote procedure call. Below is a generic representation of the gRPC message types for such a service: ```protobuf message CustomerRequest { int64 id = 1; string name = 2; // other fields that can be passed } message CustomerResponse { int64 id = 1; string name = 2; string address = 3; // other customer related fields } ``` **3. Generate Go Code:** Run the following command to generate Go code using the Go gRPC plugins: ```bash protoc \ --go_out=. \ --go_opt=paths=source_relative \ --go-grpc_out=. \ --go-grpc_opt=paths=source_relative \ customer.proto ``` This command generates two files, `customer.pb.go` and `customer_grpc.pb.go`, containing the necessary code for performing RPC calls. ## Prerequisite: gofr-cli must be installed To install the CLI: ```bash go install gofr.dev/cli/gofr@latest ``` ## Generating gRPC Server Handler Template using `gofr wrap grpc server` **1.
Use the `gofr wrap grpc server` Command:** ```bash gofr wrap grpc server -proto=./path/your/proto/file ``` This command leverages the `gofr-cli` to generate a `_server.go` file (e.g., `customer_server.go`) containing a template for your gRPC server implementation, including context support, in the same directory as that of the specified proto file. **2. Modify the Generated Code:** - Customize the `GoFrServer` struct with required dependencies and fields. - Implement the generated handler method (e.g., `GetCustomer`) to handle incoming requests, as required by your use case: - Bind the request payload using `ctx.Bind(&req)`. - Process the request and generate a response. ## Registering the gRPC Service with GoFr **1. Import Necessary Packages:** ```go import ( "path/to/your/generated-grpc-server/packageName" "gofr.dev/pkg/gofr" ) ``` **2. Register the Service in your `main.go` (the service name here follows the illustrative `CustomerService` example):** ```go func main() { app := gofr.New() packageName.RegisterCustomerServiceServerWithGofr(app, packageName.NewCustomerServiceGoFrServer()) app.Run() } ``` > Note: By default, the gRPC server runs on port 9000; to customize the port, set the `GRPC_PORT` config in the `.env` file. ## Adding gRPC Server Options To customize your gRPC server, use `AddGRPCServerOptions()`. ### Example: Enabling TLS & other ServerOptions ```go func main() { app := gofr.New() // Add TLS credentials and connection timeout in one call creds, _ := credentials.NewServerTLSFromFile("server-cert.pem", "server-key.pem") app.AddGRPCServerOptions( grpc.Creds(creds), grpc.ConnectionTimeout(10 * time.Second), ) packageName.RegisterCustomerServiceServerWithGofr(app, packageName.NewCustomerServiceGoFrServer()) app.Run() } ``` ## Adding Custom Unary Interceptors Interceptors help in implementing authentication, validation, request transformation, and error handling.
### Example: Authentication Interceptor ```go func main() { app := gofr.New() app.AddGRPCUnaryInterceptors(authInterceptor) packageName.RegisterCustomerServiceServerWithGofr(app, packageName.NewCustomerServiceGoFrServer()) app.Run() } func authInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) { if !isAuthenticated(ctx) { return nil, status.Errorf(codes.Unauthenticated, "authentication failed") } return handler(ctx, req) } ``` ## Adding Custom Stream interceptors For streaming RPCs (client-stream, server-stream, or bidirectional), GoFr allows you to add stream interceptors using `AddGRPCServerStreamInterceptors`. These are useful for handling logic that needs to span the entire lifetime of a stream. ```go func main() { app := gofr.New() app.AddGRPCServerStreamInterceptors(streamAuthInterceptor) // ... register your service app.Run() } func streamAuthInterceptor(srv any, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error { // Example: Validate metadata for the entire stream md, ok := metadata.FromIncomingContext(ss.Context()) if !ok || !isValidToken(md["auth-token"]) { return status.Errorf(codes.Unauthenticated, "invalid stream token") } // If valid, continue processing the stream return handler(srv, ss) } ``` For more details on adding additional interceptors and server options, refer to the [official gRPC Go package](https://pkg.go.dev/google.golang.org/grpc#ServerOption). ## Rate Limiter Interceptor for gRPC GoFr provides built-in rate limiter interceptors for gRPC to protect your services from abuse and ensure fair resource distribution. It uses the same token bucket algorithm and configuration as the HTTP rate limiter, applied to both unary and streaming RPCs.
### Features

- **Token Bucket Algorithm**: Allows smooth rate limiting with configurable burst capacity
- **Per-IP Rate Limiting**: Each client IP gets its own rate limit bucket (configurable)
- **Unary and Stream Support**: Separate interceptors for unary RPCs and streaming RPCs
- **Prometheus Metrics**: Track rate limit violations via the `app_grpc_rate_limit_exceeded_total` counter
- **gRPC Status Code**: Returns `RESOURCE_EXHAUSTED` (gRPC code 8) with a `retry-after` metadata header when the limit is exceeded

### Configuration

```go
import (
	"context"

	"gofr.dev/pkg/gofr"
	gofrGrpc "gofr.dev/pkg/gofr/grpc"
	"gofr.dev/pkg/gofr/http/middleware"
)

func main() {
	app := gofr.New()

	// ctx controls the lifetime of the rate limiter's background cleanup goroutine.
	// Canceling this context stops cleanup gracefully, preventing goroutine leaks
	// during rolling restarts. In production, tie this to your server's shutdown signal.
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Configure rate limiter (shared config for both unary and stream)
	cfg := middleware.RateLimiterConfig{
		RequestsPerSecond: 5,    // Average requests per second
		Burst:             10,   // Maximum burst size
		PerIP:             true, // Enable per-IP limiting
	}

	// IMPORTANT: create ONE shared store if you want a single budget
	// for both unary and stream RPCs. If Store is left nil, each
	// interceptor will create its own in-memory store and limits
	// will be enforced independently.
	store := middleware.NewMemoryRateLimiterStore(cfg)
	cfg.Store = store

	// Add rate limiter interceptors for gRPC
	app.AddGRPCUnaryInterceptors(gofrGrpc.UnaryRateLimitInterceptor(ctx, cfg, app.Logger(), app.Metrics()))
	app.AddGRPCServerStreamInterceptors(gofrGrpc.StreamRateLimitInterceptor(ctx, cfg, app.Logger(), app.Metrics()))

	// Register your gRPC service
	packageName.RegisterServerWithGofr(app, packageName.NewGoFrServer())

	app.Run()
}
```

> **Note**: The example above creates a single shared store so unary and stream RPCs draw from the **same** token bucket.
> If you want **independent limits** for each call type (e.g., high throughput for unary, tight limits for streams),
> omit the shared store and pass separate configs — see [Separate Limits for Unary and Stream RPCs](#separate-limits-for-unary-and-stream-rpcs) below.

> **Graceful Shutdown**: The `ctx` parameter controls the lifetime of the background cleanup goroutine that evicts expired token buckets.
> Cancel this context when the server shuts down to prevent goroutine leaks during rolling restarts.

### Parameters

The gRPC rate limiter uses the same `middleware.RateLimiterConfig` as the HTTP rate limiter:

- `RequestsPerSecond`: Average number of requests allowed per second
- `Burst`: Maximum number of requests that can be made in a burst (allows temporary spikes)
- `PerIP`: Set to `true` for per-IP limiting (recommended) or `false` for a global rate limit across all clients
- `TrustedProxies`: *(Optional)* Set to `true` to trust `X-Forwarded-For` and `X-Real-IP` gRPC metadata headers for IP extraction. Only enable when behind a trusted reverse proxy.

> **Security Warning**: Only set `TrustedProxies: true` if your application is behind a trusted reverse proxy (nginx, ALB, etc.).
> Without a trusted proxy, clients can spoof metadata headers to bypass rate limits.

### Behavior on Rate Limit Exceeded

When a client exceeds the rate limit: 1.
The interceptor returns a gRPC error with status code `RESOURCE_EXHAUSTED` 2. A `retry-after` response metadata header is set, indicating how many seconds the client should wait before retrying 3. The `app_grpc_rate_limit_exceeded_total` Prometheus counter is incremented with `method` and `type` (`unary` or `stream`) labels ### Separate Limits for Unary and Stream RPCs Unary calls and stream connections often have very different resource costs. You can pass independent configurations to each interceptor to enforce separate budgets — for example, allowing a high rate for lightweight unary calls while tightly limiting new stream connections: ```go unaryCfg := middleware.RateLimiterConfig{ RequestsPerSecond: 100, // High throughput for lightweight unary calls Burst: 50, PerIP: true, } streamCfg := middleware.RateLimiterConfig{ RequestsPerSecond: 5, // Streams are long-lived and expensive Burst: 3, PerIP: true, } app.AddGRPCUnaryInterceptors(gofrGrpc.UnaryRateLimitInterceptor(ctx, unaryCfg, app.Logger(), app.Metrics())) app.AddGRPCServerStreamInterceptors(gofrGrpc.StreamRateLimitInterceptor(ctx, streamCfg, app.Logger(), app.Metrics())) ``` Each config creates its own store (when `Store` is nil), so the limits are completely independent. If you instead want a **single shared budget** across both call types, create one store and assign it to both configs as shown in the [Configuration](#configuration) example above. ## Generating gRPC Client using `gofr wrap grpc client` **1. Use the `gofr wrap grpc client` Command:** ```bash gofr wrap grpc client -proto=./path/your/proto/file ``` This command leverages the `gofr-cli` to generate a `_client.go` file (e.g., `customer_client.go`). This file must not be modified. **2. 
Register the connection to your gRPC service inside your handler and make inter-service calls as follows:**

```go
// gRPC handler with context support.
// The method and message names below (SayHello, HelloRequest, HelloResponse)
// are illustrative; substitute the names generated from your own proto file.
func Hello(ctx *gofr.Context) (*HelloResponse, error) {
	// Create the gRPC client
	srv, err := NewGoFrClient("your-grpc-server-host", ctx.Metrics())
	if err != nil {
		return nil, err
	}

	// Prepare the request
	req := &HelloRequest{
		// populate fields as necessary
	}

	// Call the gRPC method with tracing/metrics enabled
	res, err := srv.SayHello(ctx, req)
	if err != nil {
		return nil, err
	}

	return res, nil
}
```

## Error Handling and Validation

GoFr's gRPC implementation includes built-in error handling and validation:

- **Port Validation**: Automatically validates that gRPC ports are within the valid range (1-65535)
- **Port Availability**: Checks if the specified port is available before starting the server
- **Server Creation**: Validates server creation and provides detailed error messages
- **Container Injection**: Validates container injection into gRPC services with detailed logging

**Port Configuration**

```bash
# Set custom gRPC port in the .env file
GRPC_PORT=9001

# Or use the default port 9000 if not specified
```

## gRPC Reflection

GoFr supports gRPC reflection for easier debugging and testing.
Enable it using the configuration: ```bash # In your .env file GRPC_ENABLE_REFLECTION=true ``` When enabled, you can use tools like grpcurl to inspect and test your gRPC services: ```bash # List available services grpcurl -plaintext localhost:9000 list # Describe a service grpcurl -plaintext localhost:9000 describe YourService # Make a test call grpcurl -plaintext -d '{"name": "test"}' localhost:9000 YourService/YourMethod ``` ## Built-in Metrics GoFr automatically registers the following gRPC metrics: + **grpc_server_status**: Gauge indicating server status (1=running, 0=stopped) + **grpc_server_errors_total**: Counter for total gRPC server errors + **grpc_services_registered_total**: Counter for total registered gRPC services These metrics are automatically available in your metrics endpoint and can be used for monitoring and alerting. ## Customizing gRPC Client with DialOptions GoFr provides flexibility to customize your gRPC client connections using gRPC `DialOptions`. This allows users to configure aspects such as transport security, interceptors, and load balancing policies. You can pass optional parameters while creating your gRPC client to tailor the connection to your needs. 
Here’s an example of a Unary Interceptor that sets metadata on outgoing requests: ```go func main() { app := gofr.New() // Create a gRPC client for the service gRPCClient, err := client.NewGoFrClient( app.Config.Get("GRPC_SERVER_HOST"), app.Metrics(), grpc.WithChainUnaryInterceptor(MetadataUnaryInterceptor), ) if err != nil { app.Logger().Errorf("Failed to create gRPC client: %v", err) return } greet := NewGreetHandler(gRPCClient) app.GET("/hello", greet.Hello) app.Run() } // MetadataUnaryInterceptor sets a custom metadata value on outgoing requests func MetadataUnaryInterceptor(ctx context.Context, method string, req, reply any, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error { md := metadata.Pairs("client-id", "GoFr-Client-123") ctx = metadata.NewOutgoingContext(ctx, md) err := invoker(ctx, method, req, reply, cc, opts...) if err != nil { return fmt.Errorf("Error in %s: %v", method, err) } return err } ``` This interceptor sets a metadata key `client-id` with a value of `GoFr-Client-123` for each request. Metadata can be used for authentication, tracing, or custom behaviors. ### Using TLS Credentials and Advanced Service Config By default, gRPC connections in GoFr are made over insecure connections, which is not recommended for production. You can override this behavior using TLS credentials. Additionally, a more comprehensive service configuration can define retry policies and other settings: ```go import ( "google.golang.org/grpc" "google.golang.org/grpc/credentials" ) // The default serviceConfig in GoFr only sets the loadBalancingPolicy to "round_robin". 
const serviceConfig = `{
	"loadBalancingPolicy": "round_robin",
	"methodConfig": [{
		"name": [{"service": "HelloService"}],
		"retryPolicy": {
			"maxAttempts": 4,
			"initialBackoff": "0.1s",
			"maxBackoff": "1s",
			"backoffMultiplier": 2.0,
			"retryableStatusCodes": ["UNAVAILABLE", "RESOURCE_EXHAUSTED"]
		}
	}]
}`

func main() {
	app := gofr.New()

	creds, err := credentials.NewClientTLSFromFile("path/to/cert.pem", "")
	if err != nil {
		app.Logger().Errorf("Failed to load TLS certificate: %v", err)
		return
	}

	gRPCClient, err := client.NewGoFrClient(
		app.Config.Get("GRPC_SERVER_HOST"),
		app.Metrics(),
		grpc.WithTransportCredentials(creds),
		grpc.WithDefaultServiceConfig(serviceConfig),
	)
	if err != nil {
		app.Logger().Errorf("Failed to create gRPC client: %v", err)
		return
	}

	greet := NewGreetHandler(gRPCClient)

	app.GET("/hello", greet.Hello)

	app.Run()
}
```

In this example:

- `WithTransportCredentials` sets up TLS security.
- `WithDefaultServiceConfig` defines retry policies with exponential backoff and specific retryable status codes.

### Further Reading

For more details on configurable DialOptions, refer to the [official gRPC package for Go](https://pkg.go.dev/google.golang.org/grpc#DialOption).

## Health Checks in GoFr's gRPC Services/Clients

GoFr provides built-in health checks for gRPC services, enabling observability, monitoring, and inter-service health verification.
### Client Interface

```go
type GoFrClient interface {
	SayHello(*gofr.Context, *HelloRequest, ...grpc.CallOption) (*HelloResponse, error)

	health
}

type health interface {
	Check(ctx *gofr.Context, in *grpc_health_v1.HealthCheckRequest, opts ...grpc.CallOption) (*grpc_health_v1.HealthCheckResponse, error)
	Watch(ctx *gofr.Context, in *grpc_health_v1.HealthCheckRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[grpc_health_v1.HealthCheckResponse], error)
}
```

### Server Integration

```go
type GoFrServer struct {
	health *healthServer
}
```

Supported methods for health checks:

```go
func (h *healthServer) Check(ctx *gofr.Context, req *grpc_health_v1.HealthCheckRequest) (*grpc_health_v1.HealthCheckResponse, error)
func (h *healthServer) Watch(ctx *gofr.Context, in *grpc_health_v1.HealthCheckRequest, stream grpc_health_v1.Health_WatchServer) error
func (h *healthServer) SetServingStatus(ctx *gofr.Context, service string, status grpc_health_v1.HealthCheckResponse_ServingStatus)
func (h *healthServer) Shutdown(ctx *gofr.Context)
func (h *healthServer) Resume(ctx *gofr.Context)
```

> ##### Check out the example of setting up a gRPC server/client in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/tree/main/examples/grpc)

---

## https://gofr.dev/docs/advanced-guide/handling-data-migrations

# Handling Data Migrations

If you make manual changes to your database, you must inform other developers so they can apply the same changes. Additionally, you need to keep track of which changes should be applied to production machines in the next deployment.

GoFr supports data migrations for MySQL, Postgres, Redis, ClickHouse & Cassandra, which allow you to modify database state — such as adding columns, changing data types, adding constraints, or managing keys.

## Usage

### Creating Migration Files

It is recommended to maintain a `migrations` directory in your project root to enhance readability and maintainability.
**Migration file names**

It is recommended that each migration file be numbered in the _YYYYMMDDHHMMSS_ format, using the timestamp at which the migration was created. This prevents numbering conflicts and ensures migrations sort correctly across different filesystems.

Run the following commands to create a migration file:

```shell
# Install GoFr CLI
go install gofr.dev/cli/gofr@latest

# Create migration
gofr migrate create -name=create_employee_table
```

Add the `createEmployeeTable` function given below in the created file in the `migrations` directory.

**Filename: 20240226153000_create_employee_table.go**

```go
package migrations

import "gofr.dev/pkg/gofr/migration"

const createTable = `CREATE TABLE IF NOT EXISTS employee (
	id int not null primary key,
	name varchar(50) not null,
	gender varchar(6) not null,
	contact_number varchar(10) not null
);`

func createEmployeeTable() migration.Migrate {
	return migration.Migrate{
		UP: func(d migration.Datasource) error {
			_, err := d.SQL.Exec(createTable)
			if err != nil {
				return err
			}

			return nil
		},
	}
}
```

`migration.Datasource` contains the supported datasources, i.e., Redis and SQL (MySQL and PostgreSQL). All migrations run within a transaction. For MySQL, use `IF EXISTS` and `IF NOT EXISTS` in DDL commands because MySQL implicitly commits these statements.

**Create a function which returns all the migrations in a map**

**Filename: all.go**

```go
package migrations

import "gofr.dev/pkg/gofr/migration"

func All() map[int64]migration.Migrate {
	return map[int64]migration.Migrate{
		20240226153000: createEmployeeTable(),
	}
}
```

Migrations run in ascending order of keys in this map.

> **Best Practice:** Before creating multiple migrations, learn about [organizing migrations by feature](#organizing-migrations-by-feature) to avoid creating one migration per table or operation.
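Because Go maps iterate in random order, a migration runner must sort the version keys before executing them. The sketch below uses a local stand-in `Migrate` type and a hypothetical `runAll` helper, not GoFr's actual runner, to make the ascending-order guarantee concrete:

```go
package main

import (
	"fmt"
	"sort"
)

// Migrate is a local stand-in for migration.Migrate, reduced to just
// enough to demonstrate ordering; it is not GoFr's type.
type Migrate struct {
	UP func() error
}

// runAll executes migrations in ascending order of their version keys.
func runAll(all map[int64]Migrate) error {
	versions := make([]int64, 0, len(all))
	for v := range all {
		versions = append(versions, v)
	}

	// Go maps iterate in random order, so sort the version keys first.
	sort.Slice(versions, func(i, j int) bool { return versions[i] < versions[j] })

	for _, v := range versions {
		if err := all[v].UP(); err != nil {
			return fmt.Errorf("migration %d failed: %w", v, err)
		}

		fmt.Printf("Migration %d ran successfully\n", v)
	}

	return nil
}

func main() {
	// The older version always runs first, regardless of map declaration order.
	_ = runAll(map[int64]Migrate{
		20240226153000: {UP: func() error { return nil }},
		20240101120000: {UP: func() error { return nil }},
	})
}
```

The same property is why timestamp-based keys matter: a migration created later always carries a larger key and therefore runs after earlier ones.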
### Initialization from main.go ```go package main import ( "gofr.dev/examples/using-migrations/migrations" "gofr.dev/pkg/gofr" ) func main() { // Create a new application a := gofr.New() // Add migrations to run a.Migrate(migrations.All()) // Run the application a.Run() } ``` When we run the app we will see the following log line for migrations which ran successfully. GoFr emits structured JSON to stdout by default: ```json {"level":"INFO","time":"2024-02-26T16:55:46.123456789+05:30","message":"Migration 20240226153000 ran successfully","gofrVersion":"v1.56.4"} ``` GoFr maintains the records in the database itself which helps in tracking which migrations have already been executed and ensures that only migrations that have never been run are executed. ## Organizing Migrations by Feature **Important:** Migrations should be organized by **feature**, not by individual database operations. The migration history should tell the story of feature evolution, not database operation granularity. 
### Bad Practice: One Migration Per Operation A common mistake is to create one migration for each table or operation, even when they're part of the same feature: ```go func All() map[int64]migration.Migrate { return map[int64]migration.Migrate{ 20251114000001: createTableUsers(), 20251114000002: createTableMonitors(), 20251114000003: createTableCheckResults(), 20251114000004: createTableIncidents(), } } ``` **Why this is problematic:** - When reverting a feature, you want to revert all related changes together - When deploying, you want to deploy the entire feature atomically - Having multiple migrations for a single feature creates unnecessary complexity and potential inconsistencies ### Good Practice: One Migration Per Feature Instead, group all database operations related to a single feature into one migration: ```go func All() map[int64]migration.Migrate { return map[int64]migration.Migrate{ 20251114000001: addMonitoringFeature(), // Creates all 4 tables together } } func addMonitoringFeature() migration.Migrate { return migration.Migrate{ UP: func(d migration.Datasource) error { // Create all tables for the monitoring feature if _, err := d.SQL.Exec(createTableUsers); err != nil { return err } if _, err := d.SQL.Exec(createTableMonitors); err != nil { return err } if _, err := d.SQL.Exec(createTableCheckResults); err != nil { return err } if _, err := d.SQL.Exec(createTableIncidents); err != nil { return err } return nil }, } } ``` **Benefits of this approach:** - **Atomic deployment:** The entire feature is deployed or reverted together - **Clear history:** Migration history reflects feature evolution, not granular operations - **Easier rollback:** Reverting a feature means reverting one migration, not tracking multiple related migrations - **Better organization:** Related changes stay together, making the codebase easier to understand ## Multi-Instance Deployments When running multiple instances of your application (e.g., in Kubernetes or Docker Swarm), 
GoFr automatically coordinates migrations to ensure only one instance runs them at a time.

### How It Works

1. **Automatic Coordination:** When multiple instances start simultaneously, they coordinate using distributed locks
2. **One Runs, Others Wait:** The first instance to acquire the lock runs migrations, while others wait
3. **Fast Path:** If migrations are already complete, instances return immediately without acquiring locks

### Lock Mechanism

**SQL (MySQL/PostgreSQL/SQLite):**
- Uses a dedicated `gofr_migration_locks` table
- Lock TTL: 15 seconds
- Heartbeat: Refreshes every 5 seconds for long migrations

**Redis:**
- Uses `SETNX` with TTL
- Lock TTL: 15 seconds
- Heartbeat: Refreshes every 5 seconds for long migrations

**Retry Behavior:**
- Max retries: Indefinite (pods wait until migration is complete)
- Retry interval: 500ms

### What This Means for You

- **✅ No code changes needed** - Locking happens automatically
- **✅ Safe deployments** - Multiple instances won't corrupt data
- **✅ Long migrations supported** - Locks are automatically extended via heartbeat
- **✅ Crash recovery** - Locks auto-expire after 15 seconds if a pod crashes

### Example Deployment

```yaml
# docker-compose.yaml or Kubernetes deployment
services:
  app:
    image: myapp:latest
    replicas: 3  # All 3 instances coordinate automatically
```

When you deploy:
- Instance 1: Acquires lock → Runs migrations → Releases lock
- Instance 2: Waits for lock → Sees migrations complete → Continues startup
- Instance 3: Waits for lock → Sees migrations complete → Continues startup

> **Note:** Single-instance deployments work exactly as before with no performance impact.
## Migration Records

**SQL**

Migration records are stored in the **gofr_migrations** table, which has the following schema:

| Field      | Type       |
|------------|------------|
| version    | bigint     |
| method     | varchar(4) |
| start_time | timestamp  |
| duration   | bigint     |

**REDIS**

Migration records are stored and maintained in a Redis hash named **gofr_migrations**, where the key is the version and the value contains the other details in JSON format.

Example:

Key: `20240226153000`
Value: `{"method":"UP","startTime":"2024-02-26T15:03:46.844558+05:30","duration":0}`

Explanation:

- **Version**: The migration version is the numeric key defined in the map.
- **Start Time**: Time when the migration started.
- **Duration**: Time taken by the migration since it started, in milliseconds.
- **Method**: Indicates whether the migration ran in UP or DOWN mode. (For now, only the UP method is supported.)

> **Note**: For Redis migration using **Streams mode**, a consumer group ID is mandatory. An empty group ID will result in an error during subscription; however, publishing will still succeed.

### Migrations in Cassandra

GoFr provides support for migrations in Cassandra but does not guarantee atomicity for individual DML commands. To achieve atomicity during migrations, users can leverage batch operations using the `NewBatch`, `BatchQuery`, and `ExecuteBatch` methods. These methods allow multiple queries to be executed as a single atomic operation.

Alternatively, users can construct their batch queries using the `BEGIN BATCH` and `APPLY BATCH` statements to ensure that all the commands within the batch are executed successfully or not at all. This is particularly useful for complex migrations involving multiple inserts, updates, or schema changes in a single transaction-like operation.

When using batch operations, consider using a `LoggedBatch` for atomicity or an `UnloggedBatch` for improved performance where atomicity isn't required. This approach helps maintain data consistency in complex migrations.
> Note: The following example assumes that users have already created the `KEYSPACE` in Cassandra. A `KEYSPACE` in Cassandra is a container for tables that defines data replication settings across the cluster. ```go package migrations import ( "gofr.dev/pkg/gofr/migration" ) const ( createTableCassandra = `CREATE TABLE IF NOT EXISTS employee ( id int PRIMARY KEY, name text, gender text, number text );` addCassandraRecords = `BEGIN BATCH INSERT INTO employee (id, name, gender, number) VALUES (1, 'Alison', 'F', '1234567980'); INSERT INTO employee (id, name, gender, number) VALUES (2, 'Alice', 'F', '9876543210'); APPLY BATCH; ` employeeDataCassandra = `INSERT INTO employee (id, name, gender, number) VALUES (?, ?, ?, ?);` ) func createEmployeeTableCassandra() migration.Migrate { return migration.Migrate{ UP: func(d migration.Datasource) error { // Execute the create table statement if err := d.Cassandra.Exec(createTableCassandra); err != nil { return err } // Batch processes can also be executed in Exec as follows: if err := d.Cassandra.Exec(addCassandraRecords); err != nil { return err } // Create a new batch operation batchName := "employeeBatch" if err := d.Cassandra.NewBatch(batchName, 0); err != nil { // 0 for LoggedBatch return err } // Add multiple queries to the batch if err := d.Cassandra.BatchQuery(batchName, employeeDataCassandra, 1, "Harry", "M", "1234567980"); err != nil { return err } if err := d.Cassandra.BatchQuery(batchName, employeeDataCassandra, 2, "John", "M", "9876543210"); err != nil { return err } // Execute the batch operation if err := d.Cassandra.ExecuteBatch(batchName); err != nil { return err } return nil }, } } ``` ## Migrations in Elasticsearch GoFr supports Elasticsearch document migrations, including **single-document** and **bulk operations**. 
### Single Document Migration ```go func addSingleProduct() migration.Migrate { return migration.Migrate{ UP: func(d migration.Datasource) error { product := map[string]any{ "title": "Laptop", "price": 999.99, "category": "electronics", } return d.Elasticsearch.IndexDocument( context.Background(), "products", "1", product, ) }, } } ``` ### Bulk Operation Migration ```go func bulkProducts() migration.Migrate { return migration.Migrate{ UP: func(d migration.Datasource) error { operations := []map[string]any{ {"index": map[string]any{"_index": "products", "_id": "1"}}, {"title": "Phone", "price": 699.99, "category": "electronics"}, {"index": map[string]any{"_index": "products", "_id": "2"}}, {"title": "Mug", "price": 12.99, "category": "kitchen"}, } _, err := d.Elasticsearch.Bulk(context.Background(), operations) return err },} } ``` ## PubSub in Migrations GoFr provides support for interacting with PubSub systems during migrations. This is particularly useful for setting up your infrastructure (e.g., creating or deleting topics) before your application logic starts using them. GoFr does not store migration records in PubSub. Migration version tracking is handled exclusively by primary data stores (SQL or Redis) that support atomicity and locking. This is because many PubSub backends (like Redis Streams or Kafka) persist messages even after they are consumed. If the PubSub bus were used as a source of truth for migration versions, stale data from previous runs or other environments could interfere with the migration process, causing legitimate migrations to be skipped. ### Configuration Requirements When using PubSub in migrations, keep in mind the configuration requirements of your backend: - **Publishing**: Generally only requires connection details (brokers, host, etc.). - **Subscribing**: Requires a **Consumer Group ID** (e.g., `CONSUMER_ID` for Kafka or `REDIS_STREAMS_CONSUMER_GROUP` for Redis Streams). 
An empty or missing value will cause an error when attempting to subscribe, whereas publishing will still function correctly. ### Usage Examples You can use the `PubSub` data source inside your `UP` migrations just like any other driver. **Creating a topic during migration:** ```go func setupMessagingFeature() migration.Migrate { return migration.Migrate{ UP: func(d migration.Datasource) error { // Create a topic required for the new feature if err := d.PubSub.CreateTopic(context.Background(), "user-registrations"); err != nil { return err } return nil }, } } ``` **Publishing a message to an existing topic (topic not created by migration):** ```go func seedInitialEvents() migration.Migrate { return migration.Migrate{ UP: func(d migration.Datasource) error { // Publish a seed message to a pre-existing topic return d.PubSub.Publish(context.Background(), "order-events", []byte(`{"event":"system-initialized"}`)) }, } } ``` > ##### Check out the example to add and run migrations in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/using-migrations/main.go) ## Related production guides - **DB Migrations in CI/CD**: [Run migrations safely from a pipeline](/docs/guides/db-migrations-in-cicd) — ordering, gating, and rollback strategies for managed deploys. --- ## https://gofr.dev/docs/advanced-guide/handling-file # Handling File GoFr simplifies the complexity of working with different file stores by offering a uniform API. This allows developers to interact with different storage systems using the same set of methods, without needing to understand the underlying implementation details of each file store. ## USAGE By default, local file-store is initialized and user can access it from the context. GoFr also supports FTP/SFTP file-store. Developers can also connect and use their cloud storage bucket as a file-store. 
The following cloud storage options are currently supported:

- **AWS S3**
- **Google Cloud Storage (GCS)**
- **Azure File Storage**

The file-store can be initialized as follows:

### FTP file-store

```go
package main

import (
	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/ftp"
)

func main() {
	app := gofr.New()

	app.AddFileStore(ftp.New(&ftp.Config{
		Host:      "127.0.0.1",
		User:      "user",
		Password:  "password",
		Port:      21,
		RemoteDir: "/ftp/user",
	}))

	app.Run()
}
```

### SFTP file-store

```go
package main

import (
	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/sftp"
)

func main() {
	app := gofr.New()

	app.AddFileStore(sftp.New(&sftp.Config{
		Host:     "127.0.0.1",
		User:     "user",
		Password: "password",
		Port:     22,
	}))

	app.Run()
}
```

### AWS S3 Bucket as File-Store

To run the S3 file-store locally, we can use localstack:

```shell
docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
```

```go
package main

import (
	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/s3"
)

func main() {
	app := gofr.New()

	// Note that currently we do not handle connections through session token.
	// BaseEndpoint is not necessary while connecting to AWS as it automatically resolves it on the basis of region.
	// However, in case we are using any other AWS-compatible service, such as when running or testing locally,
	// then this needs to be set.
	// Note that locally, AccessKeyID & SecretAccessKey are not checked if we use localstack.
	app.AddFileStore(s3.New(&s3.Config{
		EndPoint:        "http://localhost:4566",
		BucketName:      "gofr-bucket-2",
		Region:          "us-east-1",
		AccessKeyID:     app.Config.Get("AWS_ACCESS_KEY_ID"),
		SecretAccessKey: app.Config.Get("AWS_SECRET_ACCESS_KEY"),
	}))

	app.Run()
}
```

> Note: The current implementation supports handling only one bucket at a time,
> as shown in the example with `gofr-bucket-2`. Bucket switching mid-operation is not supported.

### Google Cloud Storage (GCS) Bucket as File-Store

**Local Setup with fake-gcs-server:** 1.
Start fake-gcs-server with HTTP:

```bash
docker run -d --name fake-gcs-server -p 4443:4443 \
  fsouza/fake-gcs-server -scheme http -port 4443
```

2. Create a bucket:

```bash
curl -X POST http://localhost:4443/storage/v1/b?project=my-project-id \
  -H "Content-Type: application/json" \
  -d '{"name":"my-bucket"}'
```

3. Set the environment variable in your `configs/.env` file:

```bash
STORAGE_EMULATOR_HOST=localhost:4443
```

4. Connect to GCS in your application:

```go
package main

import (
	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/gcs"
)

func main() {
	app := gofr.New()

	// Local setup with fake-gcs-server (uses STORAGE_EMULATOR_HOST)
	app.AddFileStore(gcs.New(&gcs.Config{
		BucketName: "my-bucket",
		ProjectID:  "my-project-id",
	}))

	app.Run()
}
```

**Production Setup:**

For production, authenticate using one of these methods:

```go
// Option 1: Using the GOOGLE_APPLICATION_CREDENTIALS environment variable
// Set: export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
app.AddFileStore(gcs.New(&gcs.Config{
	BucketName: "my-bucket",
	ProjectID:  "my-project-id",
}))

// Option 2: Using CredentialsJSON directly
credJSON, _ := os.ReadFile("gcs-credentials.json")
app.AddFileStore(gcs.New(&gcs.Config{
	BucketName:      "my-bucket",
	CredentialsJSON: string(credJSON),
	ProjectID:       "my-project-id",
}))
```

> **Note:**
> - When `STORAGE_EMULATOR_HOST` is set, the client automatically connects to the local emulator without authentication.
> - For production, use either the `GOOGLE_APPLICATION_CREDENTIALS` environment variable or the `CredentialsJSON` config field.
> - Currently supports one bucket per file-store instance.

### Azure File Storage as File-Store

Azure File Storage provides fully managed file shares in the cloud.
To use Azure File Storage with GoFr: ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/file/azure" ) func main() { app := gofr.New() // Create Azure File Storage filesystem fs, err := azure.New(&azure.Config{ AccountName: "mystorageaccount", AccountKey: "myaccountkey", ShareName: "myshare", // Endpoint is optional, defaults to https://{AccountName}.file.core.windows.net // Endpoint: "https://custom-endpoint.file.core.windows.net", }) if err != nil { app.Logger().Fatalf("Failed to initialize Azure File Storage: %v", err) } app.AddFileStore(fs) app.Run() } ``` > **Note:** > - Azure File Storage uses file shares (similar to S3 buckets or GCS buckets) > - Authentication requires both `AccountName` and `AccountKey` > - The `Endpoint` field is optional and defaults to `https://{AccountName}.file.core.windows.net` > - Currently supports one file share per file-store instance > - The implementation automatically retries connection if the initial connection fails > - **Automatic parent directory creation**: When creating files in nested paths (e.g., `dir1/subdir/file.txt`), parent directories are automatically created, matching local filesystem behavior > - **Content type detection**: Content types are automatically detected based on file extensions (e.g., `.json` → `application/json`, `.txt` → `text/plain`) ## Cloud-Specific Operations Beyond the standard filesystem interface, some cloud storage providers support richer capabilities — setting file metadata on upload and generating secure, time-limited download URLs. These are available through the `CloudFileSystem` interface. > **Note:** These operations are currently supported only for **Google Cloud Storage (GCS)**. Other cloud providers may gain support in future releases. ### Checking Cloud Support Use `file.AsCloud()` to safely check whether the configured file store supports cloud-specific operations. 
This avoids a raw type assertion and returns a typed interface: ```go import "gofr.dev/pkg/gofr/datasource/file" cfs, ok := file.AsCloud(c.File) if !ok { return nil, file.ErrSignedURLsNotSupported } ``` ### Uploading a File with Metadata `CreateWithOptions` works like `Create` but lets you set a `Content-Type`, `Content-Disposition`, and arbitrary key-value metadata on the object at upload time: ```go f, err := cfs.CreateWithOptions(c, "reports/q1.csv", &file.FileOptions{ ContentType: "text/csv", ContentDisposition: `attachment; filename="q1.csv"`, Metadata: map[string]string{ "uploaded-by": "invoice-service", "report-quarter": "Q1-2026", }, }) if err != nil { return nil, err } defer f.Close() _, err = f.Write(csvData) ``` Setting `ContentDisposition` ensures browsers download the file as an attachment rather than attempting to render it inline. Custom `Metadata` fields are stored on the GCS object and visible in the GCS console and `gsutil` output. ### Generating a Signed URL `GenerateSignedURL` creates a time-limited, pre-authenticated URL that allows anyone with the link to download the file — no GCS credentials required on the client side: ```go url, err := cfs.GenerateSignedURL(c, "reports/q1.csv", 15*time.Minute, nil) if err != nil { return nil, err } return url, nil ``` Pass `FileOptions` as the last argument to override the `Content-Disposition` header that the signed URL serves — useful when the object was uploaded without a disposition header but you want the browser to treat it as a download: ```go url, err := cfs.GenerateSignedURL(c, "reports/q1.csv", 1*time.Hour, &file.FileOptions{ ContentDisposition: `attachment; filename="report.csv"`, }) ``` > **Note:** > - Signed URLs require the GCS service account to have the `iam.serviceAccounts.signBlob` IAM permission. > - The URL is pre-authenticated — anyone who has it can download the file until it expires. > - Expiry is measured from the moment `GenerateSignedURL` is called. 
> - `file.AsCloud` returns `(nil, false)` for local, FTP, and SFTP file stores — always check the `ok` result. ### Creating a Directory To create a single directory: ```go err := ctx.File.Mkdir("my_dir", os.ModePerm) ``` To create subdirectories as well: ```go err := ctx.File.MkdirAll("my_dir/sub_dir", os.ModePerm) ``` ### Get Current Directory ```go currentDir, err := ctx.File.Getwd() ``` ### Change Current Directory To switch to the parent directory: ```go currentDir, err := ctx.File.Chdir("..") ``` To switch to another directory in the same parent directory: ```go currentDir, err := ctx.File.Chdir("../my_dir2") ``` To switch to a subfolder of the current directory: ```go currentDir, err := ctx.File.Chdir("sub_dir") ``` > Note: This method attempts to change the directory, but S3's flat structure and fixed bucket > make this operation inapplicable. Similarly, GCS uses a flat structure where directories are simulated through object prefixes. > Azure File Storage supports directory operations natively, so `Chdir` works as expected. ### Read a Directory The ReadDir function reads the specified directory and returns a sorted list of its entries as FileInfo objects. Each FileInfo object provides access to its associated methods, eliminating the need for additional stat calls. If an error occurs during the read operation, ReadDir returns the successfully read entries up to the point of the error along with the error itself. Passing "." as the directory argument returns the entries for the current directory. ```go entries, err := ctx.File.ReadDir("../testdir") for _, entry := range entries { entryType := "File" if entry.IsDir() { entryType = "Dir" } fmt.Printf("%v: %v Size: %v Last Modified Time : %v\n", entryType, entry.Name(), entry.Size(), entry.ModTime()) } ``` > Note: In S3 and GCS, directories are represented as prefixes of file keys/object names. This method retrieves file > entries only from the immediate level within the specified directory.
Azure File Storage supports native directory > structures, so `ReadDir` works with actual directories. ### Creating and Saving a File with Content ```go file, _ := ctx.File.Create("my_file.text") _, _ = file.Write([]byte("Hello World!")) // Closes and saves the file. file.Close() ``` > **Note for Azure File Storage:** > - Files can be created in nested directories (e.g., `dir1/subdir/file.txt`). Parent directories are automatically created if they don't exist > - Content types are automatically detected based on file extensions (e.g., `.json`, `.txt`, `.csv`, `.xml`, `.html`, `.pdf`) > - This behavior matches local filesystem operations for consistency ### Reading file as CSV/JSON/TEXT GoFr supports reading CSV/JSON/TEXT files line by line. ```go reader, err := file.ReadAll() for reader.Next() { var b string // For reading CSV/TEXT files, users need to pass a pointer to string to Scan. // For JSON, users should pass structs with JSON tags as defined in encoding/json. err = reader.Scan(&b) fmt.Println(b) } ``` ### Opening and Reading Content from a File To open a file with default settings, use the `Open` command, which provides read and seek permissions only. For write permissions, use `OpenFile` with the appropriate file modes. > Note: In FTP, file permissions are not differentiated; both `Open` and `OpenFile` allow all file operations regardless of specified permissions. ```go csvFile, _ := ctx.File.Open("my_file.csv") b := make([]byte, 200) // Read reads up to len(b) bytes into b. _, _ = csvFile.Read(b) csvFile.Close() csvFile, err = ctx.File.OpenFile("my_file.csv", os.O_RDWR, os.ModePerm) // WriteAt writes the buffer content at the specified offset.
_, err = csvFile.WriteAt([]byte("test content"), 4) if err != nil { return nil, err } ``` ### Getting Information of the File/Directory Stat retrieves details of a file or directory, including its name, size, last modified time, and type (such as whether it is a file or folder). ```go entry, _ := ctx.File.Stat("my_file.text") entryType := "File" if entry.IsDir() { entryType = "Dir" } fmt.Printf("%v: %v Size: %v Last Modified Time : %v\n", entryType, entry.Name(), entry.Size(), entry.ModTime()) ``` > Note: In S3 and GCS: > > - Names without a file extension are treated as directories by default. > - Names starting with "0" are interpreted as binary files, with the "0" prefix removed (S3 specific behavior). > > For directories, the method calculates the total size of all contained objects and returns the most recent modification time. For files, it directly returns the file's size and last modified time. > > Azure File Storage supports native file and directory structures, so `Stat` returns accurate metadata for both files and directories. ### Rename/Move a File To rename a file, provide the current name as the source and the new name as the destination. To move a file from one location to another, provide the current location as the source and the new location as the destination. ```go err := ctx.File.Rename("old_name.text", "new_name.text") ``` ### Deleting Files `Remove` deletes a single file. > Note: Currently, the S3 package supports the deletion of unversioned files from general-purpose buckets only. Directory buckets and versioned files are not supported for deletion by this method. GCS supports deletion of both files and empty directories. Azure File Storage supports deletion of both files and empty directories. ```go err := ctx.File.Remove("my_dir") ``` The `RemoveAll` command deletes all subdirectories as well.
If you delete the current working directory, such as "../currentDir", the working directory will be reset to its parent directory. > Note: In S3, RemoveAll only supports deleting directories and will return an error if a file path (as indicated by a file extension) is provided for S3. > GCS and Azure File Storage handle both files and directories. ```go err := ctx.File.RemoveAll("my_dir/my_text") ``` > GoFr supports relative paths, allowing locations to be referenced relative to the current working directory. However, since S3 and GCS use > a flat file structure, all methods require a full path relative to the bucket. Azure File Storage supports native directory structures, > so relative paths work as expected with directory navigation. > Errors have been skipped in the examples to focus on the core logic; it is recommended to handle all errors. --- ## https://gofr.dev/docs/advanced-guide/http-communication # Inter-Service HTTP Calls GoFr promotes a microservice architecture and, to facilitate it, provides support for initializing HTTP services at the application level using the `AddHTTPService()` method. Support for inter-service HTTP calls provides the following benefits: 1. Access to the methods from container - GET, PUT, POST, PATCH, DELETE. 2. Logs and traces for the request. 3. {% new-tab-link newtab=false title="Circuit breaking" href="/docs/advanced-guide/circuit-breaker" /%} for enhanced resilience and fault tolerance. 4. {% new-tab-link newtab=false title="Custom Health Check" href="/docs/advanced-guide/monitoring-service-health" /%} Endpoints ## Usage ### Registering a simple HTTP Service GoFr allows registering a new HTTP service using the application method `AddHTTPService()`. It takes a service name and a service address argument to register the dependent service at the application level. Registering multiple dependent services, a common requirement in a microservice architecture, is straightforward.
> The service instances are maintained by the container. Additional options can be provided to wrap the basic HTTP client with features like circuit breaking and custom health checks, adding to the functionality of the HTTP service. The design allows as many options as required to be added, and they are order agnostic, i.e. the order of the options is not important. > Service names are to be kept unique to one service. ```go app.AddHTTPService(serviceName, serviceAddress) ``` #### Example ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { // Create a new application app := gofr.New() // register a payment service which is hosted at http://localhost:9000 app.AddHTTPService("payment", "http://localhost:9000") app.GET("/customer", Customer) // Run the application app.Run() } ``` ### Accessing HTTP Service in handler The HTTP service client is accessible anywhere from `gofr.Context` that gets passed on from the handler. Using the `GetHTTPService` method with the service name that was given at the time of registering the service, the client can be retrieved as shown below: ```go svc := ctx.GetHTTPService(serviceName) ``` #### Available Methods The HTTP service client provides methods for making requests to downstream services: - `Get(ctx, path, queryParams)` - `Post(ctx, path, queryParams, body)` - `Put(ctx, path, queryParams, body)` - `Patch(ctx, path, queryParams, body)` - `Delete(ctx, path, body)` **For scenarios requiring custom header propagation (authentication, multi-tenancy, user identity propagation), use the `WithHeaders` variants:** - `GetWithHeaders(ctx, path, queryParams, headers)` - `PostWithHeaders(ctx, path, queryParams, body, headers)` - `PutWithHeaders(ctx, path, queryParams, body, headers)` - `PatchWithHeaders(ctx, path, queryParams, body, headers)` - `DeleteWithHeaders(ctx, path, body, headers)` ```go func Customer(ctx *gofr.Context) (any, error) { // Get the payment service client paymentSvc := ctx.GetHTTPService("payment") // Use the Get
method to call the GET /user endpoint resp, err := paymentSvc.Get(ctx, "user", nil) if err != nil { return nil, err } defer resp.Body.Close() body, err := io.ReadAll(resp.Body) if err != nil { return nil, err } return string(body), nil } // For microservice patterns involving authentication (ex: JWT Token Forwarding), use WithHeaders methods to forward custom headers. func GatewayHandler(ctx *gofr.Context) (any, error) { authInfo := ctx.GetAuthInfo() claims := authInfo.GetClaims() userID, _ := claims.GetSubject() headers := map[string]string{ "X-User-ID": userID, } userSvc := ctx.GetHTTPService("user-service") resp, err := userSvc.GetWithHeaders(ctx, "api/user/profile", nil, headers) if err != nil { return nil, err } defer resp.Body.Close() body, err := io.ReadAll(resp.Body) if err != nil { return nil, err } return string(body), nil } ``` ### Additional Configuration Options GoFr provides users with additional configuration options while registering an HTTP service for communication. These are: - **ConnectionPoolConfig** - This option allows the user to configure HTTP connection pool settings to optimize performance for high-frequency requests. The default Go HTTP client has `MaxIdleConnsPerHost: 2`, which is often insufficient for microservices making frequent requests to the same host. This configuration allows customizing: - `MaxIdleConns`: Maximum idle connections across all hosts. If not explicitly set (0), a default of 100 will be used. - `MaxIdleConnsPerHost`: Maximum idle connections per host (critical for performance). If set to 0, Go's DefaultMaxIdleConnsPerHost (2) will be used. Negative values will cause a validation error. - `IdleConnTimeout`: How long to keep idle connections alive. If not explicitly set (0), a default of 90 seconds will be used. **Important**: `ConnectionPoolConfig` must be applied **first** when using multiple options, as it needs access to the underlying HTTP client transport.
- **APIKeyConfig** - This option allows the user to set `API-Key` based authentication as the default auth for the downstream HTTP Service. - **BasicAuthConfig** - This option allows the user to set basic auth (username and password) as the default auth for the downstream HTTP Service. **Important:** The password must be base64 encoded in your configuration/environment variables. GoFr will decode it internally before creating the Authorization header. **Example:** ```bash # Generate base64 encoded password echo -n "your-password" | base64 # Output: eW91ci1wYXNzd29yZA== ``` - **OAuthConfig** - This option allows the user to add `OAuth` as the default auth for the downstream HTTP Service. - **CircuitBreakerConfig** - This option allows the user to configure the GoFr Circuit Breaker's `threshold` and `interval` for failing downstream HTTP Service calls. If the failing calls exceed the threshold, the circuit breaker will automatically be enabled. - **DefaultHeaders** - This option allows the user to set default headers that will be propagated to the downstream HTTP Service every time it is called. - **HealthConfig** - This option allows the user to add the `HealthEndpoint` along with `Timeout` to enable and perform timely health checks for the downstream HTTP Service. - **RetryConfig** - This option allows the user to set the maximum number of retries before returning an error if a downstream HTTP Service call fails. Retries are triggered for network errors and status codes **> 500** (e.g., 503 Service Unavailable). HTTP 500 is not retried. - **RateLimiterConfig** - This option allows the user to configure rate limiting for downstream service calls using a token bucket algorithm. It controls the request rate to prevent overwhelming dependent services and supports both in-memory and Redis-based implementations. **Rate Limiter Store: Customization** GoFr allows you to use a custom rate limiter store by implementing the RateLimiterStore interface.
This enables integration with any backend (e.g., Redis, database, or custom logic) **Interface:** ```go type RateLimiterStore interface { Allow(ctx context.Context, key string, config RateLimiterConfig) (allowed bool, retryAfter time.Duration, err error) StartCleanup(ctx context.Context) StopCleanup() } ``` #### Usage: ```go rc := redis.NewClient(a.Config, a.Logger(), a.Metrics()) a.AddHTTPService("cat-facts", "https://catfact.ninja", // ConnectionPoolConfig must be applied FIRST &service.ConnectionPoolConfig{ MaxIdleConns: 100, // Maximum idle connections across all hosts MaxIdleConnsPerHost: 20, // Maximum idle connections per host (increased from default 2) IdleConnTimeout: 90 * time.Second, // Keep connections alive for 90 seconds }, // Other options can follow in any order service.NewAPIKeyConfig("some-random-key"), service.NewBasicAuthConfig("username", "password"), &service.CircuitBreakerConfig{ Threshold: 4, Interval: 1 * time.Second, }, &service.DefaultHeaders{Headers: map[string]string{"key": "value"}}, &service.HealthConfig{ HealthEndpoint: "breeds", }, service.NewOAuthConfig("clientID", "clientSecret", "https://tokenurl.com", nil, nil, 0), &service.RetryConfig{ MaxRetries: 5, }, &service.RateLimiterConfig{ Requests: 5, Window: time.Minute, Burst: 10, Store: service.NewRedisRateLimiterStore(rc), // Skip this field to use in-memory store }, ) ``` **Best Practices:** - For distributed systems: It is strongly recommended to use Redis-based store (`NewRedisRateLimiterStore`) to ensure consistent rate limiting across multiple instances of your application. - For single-instance applications: The default in-memory store (`NewLocalRateLimiterStore`) is sufficient and provides better performance. - Rate configuration: Set Burst higher than Requests to allow short traffic bursts while maintaining average rate limits. ## Metrics GoFr publishes the following metrics for HTTP service communication: - `app_http_retry_count`: Total number of retry events. 
(labels: `service`) - `app_http_circuit_breaker_state`: Current state of the circuit breaker (0 for Closed, 1 for Open). (labels: `service`) - `app_http_service_response`: Response time of HTTP service requests in seconds (histogram). (labels: `service`, `path`, `method`, `status`) ## Related production guides - **Distributed Tracing**: [Trace inter-service HTTP calls end-to-end](/docs/guides/distributed-tracing) — context propagation across GoFr's HTTP client. --- ## https://gofr.dev/docs/advanced-guide/injecting-databases-drivers # Injecting Database Drivers Keeping in mind the size of the framework in the final build, it felt counter-productive to keep all the database drivers within the framework itself. Keeping only the most used MySQL and Redis within the framework, users can now inject databases into the server that satisfy the base interface defined by GoFr. This helps in reducing the build size and in turn the build time, as unnecessary database drivers are not compiled and added to the build. > We are planning to provide custom drivers for the most common databases, and these are in the pipeline for upcoming releases! ## Supported Databases {% table %} - Datasource - Health-Check - Logs - Metrics - Traces - Version-Migrations --- - MySQL - ✅ - ✅ - ✅ - ✅ - ✅ --- - REDIS - ✅ - ✅ - ✅ - ✅ - ✅ --- - PostgreSQL - ✅ - ✅ - ✅ - ✅ - ✅ --- - ArangoDB - ✅ - ✅ - ✅ - ✅ - ✅ --- - BadgerDB - ✅ - ✅ - ✅ - ✅ - --- - Cassandra - ✅ - ✅ - ✅ - ✅ - ✅ --- - ClickHouse - - ✅ - ✅ - ✅ - ✅ --- - DGraph - ✅ - ✅ - ✅ - ✅ - --- - MongoDB - ✅ - ✅ - ✅ - ✅ - ✅ --- - NATS KV - ✅ - ✅ - ✅ - ✅ - --- - OpenTSDB - ✅ - ✅ - - ✅ - --- - ScyllaDB - ✅ - ✅ - ✅ - ✅ - --- - Solr - - ✅ - ✅ - ✅ - --- - SQLite - ✅ - ✅ - ✅ - ✅ - ✅ --- - SurrealDB - ✅ - ✅ - - ✅ - --- --- ## https://gofr.dev/docs/advanced-guide/key-value-store # Key Value Store A key-value store is a type of NoSQL database that uses a simple data model: each item is stored as a pair consisting of a unique key and a value.
This simplicity offers high performance and scalability, making key-value stores ideal for applications requiring fast and efficient data retrieval and storage. GoFr supports multiple key-value stores including BadgerDB, NATS-KV, and DynamoDB. Support for other key-value stores will be added in the future. Keeping in mind the size of the application in the final build, it felt counter-productive to keep the drivers within the framework itself. GoFr provides the following functionalities for its key-value store. ```go type KVStore interface { Get(ctx context.Context, key string) (string, error) Set(ctx context.Context, key, value string) error Delete(ctx context.Context, key string) error } ``` ## BadgerDB GoFr supports injecting BadgerDB that supports the following interface. Any driver that implements the interface can be added using the `app.AddKVStore()` method, and users can use BadgerDB across the application with `gofr.Context`. Users can easily inject a driver that supports this interface; this provides usability without compromising the extensibility to use multiple databases.
Import GoFr's external driver for BadgerDB: ```shell go get gofr.dev/pkg/gofr/datasource/kv-store/badger ``` ### Example ```go package main import ( "fmt" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/kv-store/badger" ) type User struct { ID string Name string Age string } func main() { app := gofr.New() app.AddKVStore(badger.New(badger.Configs{DirPath: "badger-example"})) app.POST("/user", Post) app.GET("/user", Get) app.DELETE("/user", Delete) app.Run() } func Post(ctx *gofr.Context) (any, error) { err := ctx.KVStore.Set(ctx, "name", "gofr") if err != nil { return nil, err } return "Insertion to Key Value Store Successful", nil } func Get(ctx *gofr.Context) (any, error) { value, err := ctx.KVStore.Get(ctx, "name") if err != nil { return nil, err } return value, nil } func Delete(ctx *gofr.Context) (any, error) { err := ctx.KVStore.Delete(ctx, "name") if err != nil { return nil, err } return fmt.Sprintf("Deleted Successfully key %v from Key-Value Store", "name"), nil } ``` ## NATS-KV GoFr supports injecting NATS-KV that supports the above KVStore interface. Any driver that implements the interface can be added using the `app.AddKVStore()` method, and users can use NATS-KV across the application with `gofr.Context`. Users can easily inject a driver that supports this interface; this provides usability without compromising the extensibility to use multiple databases.
Import GoFr's external driver for NATS-KV: ```shell go get gofr.dev/pkg/gofr/datasource/kv-store/nats ``` ### Example ```go package main import ( "encoding/json" "fmt" "github.com/google/uuid" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/kv-store/nats" "gofr.dev/pkg/gofr/http" ) type Person struct { ID string `json:"id,omitempty"` Name string `json:"name"` Age int `json:"age"` Email string `json:"email,omitempty"` } func main() { app := gofr.New() app.AddKVStore(nats.New(nats.Configs{ Server: "nats://localhost:4222", Bucket: "persons", })) app.POST("/person", CreatePerson) app.GET("/person/{id}", GetPerson) app.PUT("/person/{id}", UpdatePerson) app.DELETE("/person/{id}", DeletePerson) app.Run() } func CreatePerson(ctx *gofr.Context) (any, error) { var person Person if err := ctx.Bind(&person); err != nil { return nil, http.ErrorInvalidParam{Params: []string{"body"}} } person.ID = uuid.New().String() personData, err := json.Marshal(person) if err != nil { return nil, fmt.Errorf("failed to serialize person") } if err := ctx.KVStore.Set(ctx, person.ID, string(personData)); err != nil { return nil, err } return person, nil } func GetPerson(ctx *gofr.Context) (any, error) { id := ctx.PathParam("id") if id == "" { return nil, http.ErrorInvalidParam{Params: []string{"id"}} } value, err := ctx.KVStore.Get(ctx, id) if err != nil { return nil, fmt.Errorf("person not found") } var person Person if err := json.Unmarshal([]byte(value), &person); err != nil { return nil, fmt.Errorf("failed to parse person data") } return person, nil } func UpdatePerson(ctx *gofr.Context) (any, error) { id := ctx.PathParam("id") if id == "" { return nil, http.ErrorInvalidParam{Params: []string{"id"}} } var person Person if err := ctx.Bind(&person); err != nil { return nil, http.ErrorInvalidParam{Params: []string{"body"}} } person.ID = id personData, err := json.Marshal(person) if err != nil { return nil, fmt.Errorf("failed to serialize person") } if err := ctx.KVStore.Set(ctx, id,
string(personData)); err != nil { return nil, err } return person, nil } func DeletePerson(ctx *gofr.Context) (any, error) { id := ctx.PathParam("id") if id == "" { return nil, http.ErrorInvalidParam{Params: []string{"id"}} } if err := ctx.KVStore.Delete(ctx, id); err != nil { return nil, fmt.Errorf("person not found") } return map[string]string{"message": "Person deleted successfully"}, nil } ``` ## DynamoDB GoFr supports injecting DynamoDB as a key-value store that implements the standard KVStore interface. Any driver that implements the interface can be added using the `app.AddKVStore()` method, and users can use DynamoDB across the application with `gofr.Context`. DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It's ideal for applications that need consistent, single-digit millisecond latency at any scale. Import GoFr's external driver for DynamoDB: ```shell go get gofr.dev/pkg/gofr/datasource/kv-store/dynamodb@latest ``` ### Configuration ```go type Configs struct { Table string // DynamoDB table name Region string // AWS region (e.g., "us-east-1") Endpoint string // Leave empty for real AWS; set for local DynamoDB PartitionKeyName string // Default is "pk" if not specified } ``` ### Local Development Setup For local development, you can use DynamoDB Local with Docker: ```bash # Start DynamoDB Local docker run --name dynamodb-local -d -p 8000:8000 amazon/dynamodb-local # Create a table aws dynamodb create-table \ --table-name gofr-kv-store \ --attribute-definitions AttributeName=pk,AttributeType=S \ --key-schema AttributeName=pk,KeyType=HASH \ --billing-mode PAY_PER_REQUEST \ --endpoint-url http://localhost:8000 \ --region us-east-1 ``` ### JSON Helper Functions The DynamoDB package provides helper functions for JSON serialization/deserialization that work with the standard KVStore interface: ```go // ToJSON converts any struct to JSON string func ToJSON(value any) (string, error) //
FromJSON converts JSON string to struct func FromJSON(jsonData string, dest any) error ``` ### Example ```go package main import ( "fmt" "time" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/kv-store/dynamodb" ) type User struct { ID string `json:"id"` Name string `json:"name"` Email string `json:"email"` CreatedAt time.Time `json:"created_at"` } func main() { app := gofr.New() // Create DynamoDB client with configuration db := dynamodb.New(dynamodb.Configs{ Table: "gofr-kv-store", Region: "us-east-1", Endpoint: "http://localhost:8000", // For local DynamoDB PartitionKeyName: "pk", }) // Connect to DynamoDB db.Connect() // Inject the DynamoDB into gofr app.AddKVStore(db) app.POST("/user", CreateUser) app.GET("/user/{id}", GetUser) app.PUT("/user/{id}", UpdateUser) app.DELETE("/user/{id}", DeleteUser) app.Run() } func CreateUser(ctx *gofr.Context) (any, error) { var user User if err := ctx.Bind(&user); err != nil { return nil, err } user.ID = fmt.Sprintf("user_%d", time.Now().UnixNano()) user.CreatedAt = time.Now() // Convert struct to JSON string using helper function userData, err := dynamodb.ToJSON(user) if err != nil { return nil, fmt.Errorf("failed to serialize user: %w", err) } // Store using standard KVStore interface if err := ctx.KVStore.Set(ctx, user.ID, userData); err != nil { return nil, fmt.Errorf("failed to create user: %w", err) } return user, nil } func GetUser(ctx *gofr.Context) (any, error) { id := ctx.PathParam("id") if id == "" { return nil, fmt.Errorf("user ID is required") } // Get JSON string from KVStore userData, err := ctx.KVStore.Get(ctx, id) if err != nil { return nil, fmt.Errorf("user not found: %w", err) } // Convert JSON string to struct using helper function var user User if err := dynamodb.FromJSON(userData, &user); err != nil { return nil, fmt.Errorf("failed to parse user data: %w", err) } return user, nil } func UpdateUser(ctx *gofr.Context) (any, error) { id := ctx.PathParam("id") if id == "" { return nil, fmt.Errorf("user ID 
is required") } var user User if err := ctx.Bind(&user); err != nil { return nil, err } user.ID = id // Convert struct to JSON string using helper function userData, err := dynamodb.ToJSON(user) if err != nil { return nil, fmt.Errorf("failed to serialize user: %w", err) } // Update in DynamoDB using standard KVStore interface if err := ctx.KVStore.Set(ctx, id, userData); err != nil { return nil, fmt.Errorf("failed to update user: %w", err) } return user, nil } func DeleteUser(ctx *gofr.Context) (any, error) { id := ctx.PathParam("id") if id == "" { return nil, fmt.Errorf("user ID is required") } // Delete from DynamoDB using standard KVStore interface if err := ctx.KVStore.Delete(ctx, id); err != nil { return nil, fmt.Errorf("failed to delete user: %w", err) } return map[string]string{"message": "User deleted successfully"}, nil } ``` ### Production Configuration For production use, remove the `Endpoint` field to connect to real AWS DynamoDB: ```go db := dynamodb.New(dynamodb.Configs{ Table: "gofr-kv-store", Region: "us-east-1", // Endpoint: "", // Remove this for production PartitionKeyName: "pk", }) ``` ### AWS Credentials For production, ensure your AWS credentials are configured through: - AWS IAM roles (recommended for EC2/ECS/Lambda) - AWS credentials file (`~/.aws/credentials`) - Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`) --- ## https://gofr.dev/docs/advanced-guide/middlewares # Middleware in GoFr Middleware allows you to intercept and manipulate HTTP requests and responses flowing through your application's router. Middlewares can perform tasks such as authentication, authorization, caching, etc. before or after the request reaches your application's handler. ## CORS Middleware in GoFr GoFr includes built-in CORS (Cross-Origin Resource Sharing) middleware to handle CORS-related headers. This middleware allows you to control access to your API from different origins.
It automatically adds the necessary headers to responses, allowing or restricting cross-origin requests. Users can also override the default response headers sent by GoFr by providing suitable CORS configs. The CORS middleware provides the following overridable configs: - `ACCESS_CONTROL_ALLOW_ORIGIN`: Set the allowed origin(s) for cross-origin requests. By default, it allows all origins (`*`). Supports comma-separated values for multiple origins (e.g., `https://app.example.com,https://admin.example.com`). When multiple origins are configured, the middleware dynamically matches the request's `Origin` header and responds with the matched origin, adding a `Vary: Origin` header for correct HTTP caching. - `ACCESS_CONTROL_ALLOW_HEADERS`: Define the allowed request headers (e.g., Authorization, Content-Type). - `ACCESS_CONTROL_ALLOW_CREDENTIALS`: Set to true to allow credentials (cookies, HTTP authentication) in requests. - `ACCESS_CONTROL_EXPOSE_HEADERS`: Specify additional headers exposed to the client. - `ACCESS_CONTROL_MAX_AGE`: Set the maximum time (in seconds) for preflight request caching. > Note: GoFr automatically interprets the registered route methods and based on that sets the value of `ACCESS_CONTROL_ALLOW_METHODS` ## Adding Custom Middleware in GoFr By adding custom middleware to your GoFr application, users can easily extend its functionality and implement cross-cutting concerns in a modular and reusable way. Users can use the `UseMiddleware` or `UseMiddlewareWithContainer` method on the GoFr application instance to register custom middleware. ### Using UseMiddleware method for Custom Middleware The UseMiddleware method is ideal for simple middleware that doesn't need direct access to the application's container.
#### Example: ```go import ( "net/http" gofrHTTP "gofr.dev/pkg/gofr/http" ) // Define your custom middleware function func customMiddleware() gofrHTTP.Middleware { return func(inner http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { // Your custom logic here // For example, logging, authentication, etc. // Call the next handler in the chain inner.ServeHTTP(w, r) }) } } func main() { // Create a new instance of your GoFr application app := gofr.New() // Add your custom middleware to the application app.UseMiddleware(customMiddleware()) // Define your application routes and handlers // ... // Run your GoFr application app.Run() } ``` ## Rate Limiter Middleware in GoFr GoFr provides a built-in rate limiter middleware to protect your API from abuse and ensure fair resource distribution. It uses a token bucket algorithm for smooth rate limiting with configurable burst capacity. ### Features - **Token Bucket Algorithm**: Allows smooth rate limiting with configurable burst capacity - **Per-IP Rate Limiting**: Each client IP gets its own rate limit (configurable) - **Health Check Exemption**: `/.well-known/alive` and `/.well-known/health` endpoints are automatically exempt - **Prometheus Metrics**: Track rate limit violations via `app_http_rate_limit_exceeded_total` counter - **429 Status Code**: Returns standard HTTP 429 (Too Many Requests) when limit is exceeded ### Configuration ```go import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/http/middleware" ) func main() { app := gofr.New() // Configure rate limiter rateLimiterConfig := middleware.RateLimiterConfig{ RequestsPerSecond: 5, // Average requests per second Burst: 10, // Maximum burst size PerIP: true, // Enable per-IP limiting } // Add rate limiter middleware app.UseMiddleware(middleware.RateLimiter(rateLimiterConfig, app.Metrics())) app.GET("/api/resource", handler) app.Run() } ``` ### Parameters - `RequestsPerSecond`: Average number of requests allowed per second - 
`Burst`: Maximum number of requests that can be made in a burst (allows temporary spikes) - `PerIP`: Set to `true` for per-IP limiting (recommended) or `false` for global rate limit across all clients - `TrustedProxies`: *(Optional)* Set to `true` to trust `X-Forwarded-For` and `X-Real-IP` headers for IP extraction. Only enable when behind a trusted reverse proxy. > **Security Warning**: Only set `TrustedProxies: true` if your application is behind a trusted reverse proxy (nginx, ALB, etc.). > Without a trusted proxy, clients can spoof headers to bypass rate limits. --- ## https://gofr.dev/docs/advanced-guide/monitoring-service-health # Monitoring Service Health Health check in microservices refers to a mechanism or process implemented within each service to assess its operational status and readiness to handle requests. It involves regularly querying the service to determine if it is functioning correctly, typically by evaluating its responsiveness and ability to perform essential tasks. Health checks play a critical role in ensuring service availability, detecting failures, preventing cascading issues, and facilitating effective traffic routing in distributed systems. ## GoFr by default registers two endpoints which are: ### 1. Aliveness - /.well-known/alive It is an endpoint which returns the following response with a 200 status code, when the service is UP. ```json { "data": { "status": "UP" } } ``` It is also used when state of {% new-tab-link newtab=false title="circuit breaker" href="/docs/advanced-guide/circuit-breaker" /%} is open. To override this endpoint, pass the following option while registering HTTP Service: ```go &service.HealthConfig{ HealthEndpoint: "breeds", } ``` ### 2. Health-Check - /.well-known/health It is an endpoint which returns whether the service is UP or DOWN along with stats, host, status about the dependent datasources and services. 
Sample response of how it appears when all the services, and connected data sources are UP: ```json { "data": { "anotherService": { "status": "UP", "details": { "host": "localhost:9000" } }, "redis": { "status": "UP", "details": { "host": "localhost:2002", "stats": { "active_defrag_hits": "0", "active_defrag_key_hits": "0", "active_defrag_key_misses": "0", "active_defrag_misses": "0", "current_active_defrag_time": "0", "current_eviction_exceeded_time": "0", "dump_payload_sanitizations": "0", "evicted_clients": "0", "evicted_keys": "0", "expire_cycle_cpu_milliseconds": "1", "expired_keys": "0", "expired_stale_perc": "0.00", "expired_time_cap_reached_count": "0", "instantaneous_input_kbps": "0.00", "instantaneous_input_repl_kbps": "0.00", "instantaneous_ops_per_sec": "0", "instantaneous_output_kbps": "0.00", "instantaneous_output_repl_kbps": "0.00", "io_threaded_reads_processed": "0", "io_threaded_writes_processed": "0", "keyspace_hits": "0", "keyspace_misses": "0", "latest_fork_usec": "0", "migrate_cached_sockets": "0", "pubsub_channels": "0", "pubsub_patterns": "0", "pubsubshard_channels": "0", "rejected_connections": "0", "reply_buffer_expands": "0", "reply_buffer_shrinks": "1", "slave_expires_tracked_keys": "0", "sync_full": "0", "sync_partial_err": "0", "sync_partial_ok": "0", "total_active_defrag_time": "0", "total_commands_processed": "2", "total_connections_received": "1", "total_error_replies": "2", "total_eviction_exceeded_time": "0", "total_forks": "0", "total_net_input_bytes": "183", "total_net_output_bytes": "257", "total_net_repl_input_bytes": "0", "total_net_repl_output_bytes": "0", "total_reads_processed": "5", "total_writes_processed": "4", "tracking_total_items": "0", "tracking_total_keys": "0", "tracking_total_prefixes": "0", "unexpected_error_replies": "0" } } }, "sql": { "status": "UP", "details": { "host": "localhost:2001/test", "stats": { "maxOpenConnections": 0, "openConnections": 1, "inUse": 0, "idle": 1, "waitCount": 0, "waitDuration": 0, 
"maxIdleClosed": 0, "maxIdleTimeClosed": 0, "maxLifetimeClosed": 0 } } } } } ``` ## Related production guides - **Deploying to Kubernetes**: [Wire `/.well-known/alive` into liveness/readiness probes](/docs/guides/deploying-to-kubernetes) — make Kubernetes act on the health information GoFr exposes. --- ## https://gofr.dev/docs/advanced-guide/overriding-default # Overriding Default GoFr allows overriding default behavior of its features. ## Raw response format GoFr by default wraps a handler's return value and assigns it to the `data` field in a response. ### Example ```go package main import "gofr.dev/pkg/gofr" type user struct { ID int `json:"id"` Name string `json:"name"` } func main() { app := gofr.New() app.GET("/users", func(ctx *gofr.Context) (any, error) { users := []user{{ID: 1, Name: "Daria"}, {ID: 2, Name: "Ihor"}} return users, nil }) app.Run() } ``` Response example: ```json { "data": [ { "id": 1, "name": "Daria" }, { "id": 2, "name": "Ihor" } ] } ``` If you want to have a raw response structure - wrap it in `response.Raw`: ```go app.GET("/users", func(ctx *gofr.Context) (any, error) { users := []user{{ID: 1, Name: "Daria"}, {ID: 2, Name: "Ihor"}} return response.Raw{Data: users}, nil }) ``` Response example: ```json [ { "id": 1, "name": "Daria" }, { "id": 2, "name": "Ihor" } ] ``` ### XML responses If you need to respond with XML without JSON encoding, return `response.XML`. It bypasses JSON encoding just like `response.File` or `response.Template` and writes the bytes directly to the client. The `ContentType` defaults to `application/xml` but can be overridden. ```go app.GET("/legacy/xml", func(ctx *gofr.Context) (any, error) { payload := []byte(`Hello`) return response.XML{Content: payload}, nil }) ``` ```xml Hello ``` ## Rendering Templates GoFr makes it easy to render HTML and HTMX templates directly from your handlers using the response.Template type. 
By convention, all template files—whether HTML or HTMX—should be placed inside a templates directory located at the root of your project. ### Example ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/http/response" ) func main() { app := gofr.New() app.GET("/list", listHandler) app.AddStaticFiles("/", "./static") app.Run() } type Todo struct { Title string Done bool } type TodoPageData struct { PageTitle string Todos []Todo } func listHandler(ctx *gofr.Context) (any, error) { // Get data from somewhere data := TodoPageData{ PageTitle: "My TODO list", Todos: []Todo{ {Title: "Expand on Gofr documentation ", Done: false}, {Title: "Add more examples", Done: true}, {Title: "Write some articles", Done: false}, }, } return response.Template{Data: data, Name: "todo.html"}, nil } ``` ## HTTP Redirects GoFr allows redirecting HTTP requests to other URLs using the `response.Redirect` type. ### Example ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/http/response" ) func main() { app := gofr.New() app.GET("/old-page", func(ctx *gofr.Context) (any, error) { // Redirect to a new URL return response.Redirect{URL: "https://example.com/new-page"}, nil }) app.Run() } ``` In GoFr, the following HTTP methods can be redirected, along with their corresponding status codes: - **GET (302 Found)**: It is safe to redirect because the request remains a GET after the redirect. - **POST (303 See Other)**: The browser converts the POST request to a GET on redirect. - **PUT (303 See Other)**: The browser converts the PUT request to a GET on redirect. - **PATCH (303 See Other)**: The browser converts the PATCH request to a GET on redirect. - **DELETE (302 Found)**: This is a temporary redirect, but method handling is ambiguous, as most browsers historically convert the DELETE request into a GET. ## Favicon.ico By default, GoFr loads its own `favicon.ico` present in root directory for an application. 
To override `favicon.ico` user can place its custom icon in the **static** directory of its application. > [!NOTE] > The custom favicon should also be named as `favicon.ico` in the static directory of application. --- ## https://gofr.dev/docs/advanced-guide/publishing-custom-metrics # Publishing Custom Metrics GoFr publishes some {% new-tab-link newtab=false title="default metrics" href="/docs/quick-start/observability" /%}. GoFr can handle multiple different metrics concurrently, each uniquely identified by its name during initialization. It supports the following {% new-tab-link title="metrics" href="https://opentelemetry.io/docs/specs/otel/metrics/" /%} types in Prometheus format: 1. `Counter` 2. `UpDownCounter` 3. `Histogram` 4. `Gauge` If any custom metric is required, it can be created by using custom metrics as shown below: ## Usage ## 1. Counter Metrics Counter is a {% new-tab-link title="synchronous Instrument" href="https://opentelemetry.io/docs/specs/otel/metrics/api/#synchronous-instrument-api" /%} which supports non-negative increments. ### Usage ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { // initialize gofr object app := gofr.New() app.Metrics().NewCounter("transaction_success", "used to track the count of successful transactions") app.POST("/transaction", func(ctx *gofr.Context) (any, error) { ctx.Metrics().IncrementCounter(ctx, "transaction_success") return "Transaction Successful", nil }) app.Run() } ``` ## 2. UpDown Counter Metrics `UpDownCounter` is a {% new-tab-link title="synchronous Instrument" href="https://opentelemetry.io/docs/specs/otel/metrics/api/#synchronous-instrument-api" /%} which supports increments and decrements. Note: If the value is monotonically increasing, use Counter instead. 
### Usage ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { // initialize gofr object app := gofr.New() app.Metrics().NewUpDownCounter("total_credit_day_sale", "used to track the total credit sales in a day") app.POST("/sale", func(ctx *gofr.Context) (any, error) { ctx.Metrics().DeltaUpDownCounter(ctx, "total_credit_day_sale", 1000) return "Sale Completed", nil }) app.Run() } ``` ## 3. Histogram Metrics Histogram is a {% new-tab-link title="synchronous Instrument" href="https://opentelemetry.io/docs/specs/otel/metrics/api/#synchronous-instrument-api" /%} which can be used to report arbitrary values that are likely to be statistically meaningful. It is intended for statistics such as histograms, summaries, and percentile. ### Usage ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { // initialize gofr object app := gofr.New() app.Metrics().NewHistogram("transaction_time", "used to track the time taken by a transaction", 5, 10, 15, 20, 25, 35) app.POST("/transaction", func(ctx *gofr.Context) (any, error) { transactionStartTime := time.Now() // transaction logic tranTime := time.Now().Sub(transactionStartTime).Milliseconds() ctx.Metrics().RecordHistogram(ctx, "transaction_time", float64(tranTime)) return "Transaction Completed", nil }) app.Run() } ``` ## 4. Gauge Metrics Gauge is a {% new-tab-link title="synchronous Instrument" href="https://opentelemetry.io/docs/specs/otel/metrics/api/#synchronous-instrument-api" /%} which can be used to record non-additive value(s) when changes occur. ### Usage ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { // initialize gofr object app := gofr.New() app.Metrics().NewGauge("product_stock", "used to track the number of products in stock") app.POST("/sale", func(ctx *gofr.Context) (any, error) { ctx.Metrics().SetGauge("product_stock", 10) return "Sale Completed", nil }) app.Run() } ``` ## Adding Labels to Custom Metrics GoFr leverages metrics support by enabling labels. 
Labels are a key feature in metrics that allows us to categorize and filter metrics based on relevant information. ### Understanding Labels Labels are key-value pairs attached to metrics. They provide additional context about the metric data. Common examples of labels include: - environment: (e.g., "production", "staging") - service: (e.g., "api-gateway", "database") - status: (e.g., "success", "failure") By adding labels, we can create different time series for the same metric based on the label values. This allows for more granular analysis and visualization in Grafana (or any other) dashboards. ### Additional Considerations - Prefer to keep the number of labels manageable to avoid overwhelming complexity. - Choose meaningful label names that clearly describe the data point. - Ensure consistency in label naming conventions across your application. By effectively using labels in GoFr, we can enrich your custom metrics and gain deeper insights into your application's performance and behavior. ### Usage: Labels are added while populating the data for metrics, by passing them as arguments (comma separated key-value pairs) in the GoFr's methods (namely: `IncrementCounter`, `DeltaUpDownCounter`, `RecordHistogram`, `SetGauge`). 
Example: `c.Metrics().IncrementCounter(c, "metric-name", "metric-value", "label-1", "value-1", "label-2", "value-2")` ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { // Initialize gofr object a := gofr.New() // Add custom metrics a.Metrics().NewUpDownCounter("total_credit_day_sale", "used to track the total credit sales in a day") // Add all the routes a.POST("/sale", SaleHandler) a.POST("/return", ReturnHandler) // Run the application a.Run() } func SaleHandler(c *gofr.Context) (any, error) { // logic to create sales c.Metrics().DeltaUpDownCounter(c, "total_credit_day_sale", 10, "sale_type", "credit", "product_type", "beverage") // Here "sale_type" & "product_type" are the labels and "credit" & "beverage" are the values return "Sale Successful", nil } func ReturnHandler(c *gofr.Context) (any, error) { // logic to create a sales return c.Metrics().DeltaUpDownCounter(c, "total_credit_day_sale", -5, "sale_type", "credit_return", "product_type", "dairy") return "Return Successful", nil } ``` **Good To Know** ```doc While registering a metrics 2 key pieces of information of required: - Name - Description When a registered metrics has to be used 3 key pieces of information are required: - Name - Value - A set of key-value pairs called tags or labels. A permutation of these key-value values provides the metric cardinality. Lower the cardinality, faster the query performance and lower the monitoring resource utilization. ``` > #### Check out the example on how to publish custom metrics in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/using-custom-metrics/main.go) ## Related production guides - **Prometheus on Kubernetes**: [Scrape and store custom metrics in production](/docs/guides/production-prometheus-kubernetes) — ServiceMonitors, recording rules, retention. - **Horizontal Pod Autoscaler**: [Drive autoscaling from custom metrics](/docs/guides/horizontal-pod-autoscaler) — scale GoFr services on the metrics you publish. 
--- ## https://gofr.dev/docs/advanced-guide/rbac # Role-Based Access Control (RBAC) in GoFr Role-Based Access Control (RBAC) is a security mechanism that restricts access to resources based on user roles and permissions. GoFr provides a pure config-based RBAC middleware that supports multiple authentication methods, fine-grained permissions, and role inheritance. ## Overview - ✅ **Pure Config-Based** - All authorization rules in JSON/YAML files - ✅ **Two-Level Authorization Model** - Roles define permissions, endpoints require permissions (no direct role-to-route mapping) - ✅ **Multiple Auth Methods** - Header-based and JWT-based role extraction - ✅ **Permission-Based** - Fine-grained permissions - ✅ **Role Inheritance** - Roles inherit permissions from other roles ## Quick Start ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { app := gofr.New() // Use default paths (configs/rbac.json, configs/rbac.yaml, configs/rbac.yml) // Uses rbac.DefaultConfigPath internally (empty string triggers default path resolution) // Tries configs/rbac.json, then configs/rbac.yaml, then configs/rbac.yml app.EnableRBAC() // Or with custom config path app.EnableRBAC("configs/custom-rbac.json") app.GET("/api/users", handler) app.Run() } ``` **Configuration** (`configs/rbac.json`): ```json { "roleHeader": "X-User-Role", "roles": [ { "name": "admin", "permissions": ["users:read", "users:write", "users:delete", "posts:read", "posts:write"] }, { "name": "editor", "permissions": ["users:write", "posts:write"], "inheritsFrom": ["viewer"] }, { "name": "viewer", "permissions": ["users:read", "posts:read"] } ], "endpoints": [ { "path": "/health", "methods": ["GET"], "public": true }, { "path": "/api/users", "methods": ["GET"], "requiredPermissions": ["users:read"] }, { "path": "/api/users", "methods": ["POST"], "requiredPermissions": ["users:write"] } ] } ``` > **💡 Best Practice**: For production/public APIs, use JWT-based RBAC instead of header-based RBAC for better security. 
## Configuration ### Role Extraction **Header-Based** (for internal/trusted networks): ```json { "roleHeader": "X-User-Role" } ``` **JWT-Based** (for production/public APIs): ```json { "jwtClaimPath": "role" // or "roles[0]", "permissions.role", etc. } ``` **Precedence**: If both are set, **only JWT is considered**. The header is not checked when `jwtClaimPath` is configured, even if JWT extraction fails. **JWT Claim Path Formats**: - `"role"` → `{"role": "admin"}` - `"roles[0]"` → `{"roles": ["admin", "user"]}` (first element) - `"permissions.role"` → `{"permissions": {"role": "admin"}}` ### Roles and Permissions ```json { "roles": [ { "name": "admin", "permissions": ["users:read", "users:write", "users:delete", "posts:read", "posts:write"] // Explicit permissions (wildcards not supported) }, { "name": "editor", "permissions": ["users:write", "posts:write"], // Only additional permissions "inheritsFrom": ["viewer"] // Inherits viewer's permissions }, { "name": "viewer", "permissions": ["users:read", "posts:read"] } ] } ``` **Note**: When using `inheritsFrom`, only specify additional permissions - inherited ones are automatically included. 
### Endpoint Mapping ```json { "endpoints": [ { "path": "/health", "methods": ["GET"], "public": true // Bypasses authorization }, { "path": "/api/users", "methods": ["GET"], "requiredPermissions": ["users:read"] }, { "path": "/api/users/{id:[0-9]+}", // Mux pattern with constraint (numeric IDs only) "methods": ["DELETE"], "requiredPermissions": ["users:delete"] }, { "path": "/api/{resource}", // Single-level pattern - matches /api/users, /api/posts "methods": ["GET"], "requiredPermissions": ["api:read"] }, { "path": "/api/{path:.*}", // Multi-level pattern - matches /api/users/123, /api/posts/comments "methods": ["*"], // All methods "requiredPermissions": ["admin:read", "admin:write"] // Multiple permissions (OR logic) }, { "path": "/api/{category}/posts", // Middle variable - matches /api/tech/posts, /api/news/posts "methods": ["GET"], "requiredPermissions": ["posts:read"] } ] } ``` ### Mux Pattern Syntax RBAC uses **gorilla/mux route pattern conventions** for endpoint matching. This ensures perfect alignment with how routes are registered in GoFr. **Important**: The RBAC middleware uses the same router configuration as GoFr's application router (`StrictSlash(false)`), ensuring consistent behavior for trailing slashes. This means `/api/users` and `/api/users/` are treated as the same route in both RBAC authorization checks and actual route matching. 
**Pattern Types**: - **Exact**: `"/api/users"` matches exactly `/api/users` - **Single Variable**: `"/api/users/{id}"` matches `/api/users/123`, `/api/users/abc` (any single segment) - **Variable with Constraint**: `"/api/users/{id:[0-9]+}"` matches `/api/users/123` (numeric IDs only) - **Single-Level Pattern**: `"/api/{resource}"` matches `/api/users`, `/api/posts` (one segment) - **Multi-Level Pattern**: `"/api/{path:.*}"` matches `/api/users/123`, `/api/posts/comments` (any depth) - **Middle Variable**: `"/api/{category}/posts"` matches `/api/tech/posts`, `/api/news/posts` **Common Patterns**: - Numeric IDs: `"/api/users/{id:[0-9]+}"` (matches `/api/users/123`) - UUIDs: `"/api/users/{uuid:[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}}"` (matches `/api/users/550e8400-e29b-41d4-a716-446655440000`) - Alphanumeric: `"/api/users/{name:[a-zA-Z0-9]+}"` (matches `/api/users/user123`) **Grouped Endpoints**: For endpoints that need to match multiple paths, use mux patterns: - **Single-level wildcard**: Use `"/api/{resource}"` instead of `"/api/*"` - Matches: `/api/users`, `/api/posts` (one segment) - **Multi-level wildcard**: Use `"/api/{path:.*}"` instead of `"/api/*"` - Matches: `/api/users/123`, `/api/posts/comments` (any depth) - **Middle variable**: Use `"/api/{category}/posts"` instead of `"/api/*/posts"` - Matches: `/api/tech/posts`, `/api/news/posts` ## JWT-Based RBAC For production/public APIs, use JWT-based role extraction: ```go app := gofr.New() // Enable OAuth middleware first (required for JWT validation) app.EnableOAuth("https://auth.example.com/.well-known/jwks.json", 10) // Enable RBAC with config path (or use app.EnableRBAC() for default paths using rbac.DefaultConfigPath) app.EnableRBAC("configs/rbac.json") ``` **Configuration** (`configs/rbac.json`): ```json { "jwtClaimPath": "role", // or "roles[0]", "permissions.role", etc. "roles": [...], "endpoints": [...] 
} ``` ## Accessing Role in Handlers For business logic, you can access the user's role from the request context: **JWT-Based RBAC** (when using JWT role extraction): ```go import ( "encoding/json" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/http" ) // JWTClaims represents the JWT claims structure type JWTClaims struct { Role string `json:"role"` Sub string `json:"sub"` // Add other claim fields as needed } func handler(ctx *gofr.Context) (interface{}, error) { // Get JWT claims from context claimsMap := ctx.GetAuthInfo().GetClaims() if claimsMap == nil { return nil, http.ErrorInvalidParam{Params: []string{"authorization"}} } // Convert map claims to struct (recommended GoFr pattern) var claims JWTClaims claimsBytes, err := json.Marshal(claimsMap) if err != nil { return nil, http.ErrorInvalidParam{Params: []string{"claims"}} } if err := json.Unmarshal(claimsBytes, &claims); err != nil { return nil, http.ErrorInvalidParam{Params: []string{"claims"}} } // Use role for business logic (e.g., personalize UI, filter data) // The role field matches the jwtClaimPath configured in rbac.json return map[string]string{"userRole": claims.Role}, nil } ``` **Note**: All authorization is handled automatically by the middleware. Accessing the role in handlers is only for business logic purposes (e.g., personalizing UI, filtering data). 
## Permission Naming Conventions ### Recommended Format Use the format: `resource:action` - **Resource**: The entity being accessed (e.g., `users`, `posts`, `orders`) - **Action**: The operation being performed (e.g., `read`, `write`, `delete`, `update`) ### Examples: ```editorconfig "users:read" // Read users "users:write" // Create/update users "users:delete" // Delete users "posts:read" // Read posts "posts:write" // Create/update posts "orders:approve" // Approve orders "reports:export" // Export reports ``` **Avoid inconsistent formats**: - ❌ `"read_users"`, `"writeUsers"`, `"DELETE_POSTS"` - ✅ `"users:read"`, `"users:write"`, `"posts:delete"` ### Wildcards Not Supported **Important**: Wildcards are **NOT supported** in permissions. Only exact matches are allowed. - ❌ `"*:*"` - Does not match all permissions - ❌ `"users:*"` - Does not match all user permissions - ✅ `"users:read"` - Exact match only - ✅ `"users:write"` - Exact match only If you need multiple permissions, specify them explicitly: ```json { "name": "admin", "permissions": ["users:read", "users:write", "users:delete", "posts:read", "posts:write"] } ``` Or use role inheritance to avoid duplication: ```json { "name": "editor", "permissions": ["users:write", "posts:write"], "inheritsFrom": ["viewer"] // Inherits viewer's permissions } ``` ## Common Patterns ### CRUD Permissions ```json { "roles": [ { "name": "admin", "permissions": ["users:delete"], "inheritsFrom": ["editor"] }, { "name": "editor", "permissions": ["users:create", "users:update"], "inheritsFrom": ["viewer"] }, { "name": "viewer", "permissions": ["users:read"] } ], "endpoints": [ { "path": "/api/users", "methods": ["POST"], "requiredPermissions": ["users:create"] }, { "path": "/api/users", "methods": ["GET"], "requiredPermissions": ["users:read"] }, { "path": "/api/users/{id:[0-9]+}", "methods": ["PUT", "PATCH"], "requiredPermissions": ["users:update"] }, { "path": "/api/users/{id:[0-9]+}", "methods": ["DELETE"], "requiredPermissions": 
["users:delete"] } ] } ``` ### Resource-Specific Permissions ```json { "roles": [ { "name": "admin", "permissions": ["own:posts:read", "own:posts:write", "all:posts:read", "all:posts:write"] }, { "name": "author", "permissions": ["own:posts:read", "own:posts:write"] }, { "name": "viewer", "permissions": ["own:posts:read", "all:posts:read"] } ], "endpoints": [ { "path": "/api/posts/my-posts", "methods": ["GET"], "requiredPermissions": ["own:posts:read"] }, { "path": "/api/posts", "methods": ["GET"], "requiredPermissions": ["all:posts:read"] } ] } ``` ## Best Practices ### Security - **Never use header-based RBAC for public APIs** - Use JWT-based RBAC - **Always validate JWT tokens** - Use proper JWKS endpoints with HTTPS - **Use HTTPS in production** - Protect tokens and headers - **Monitor logs** - Track authorization decisions ### Configuration - **Use role inheritance** - Avoid duplicating permissions (only specify additional ones) - **Use consistent naming** - Follow `resource:action` format (e.g., `users:read`, `posts:write`) - **Group related permissions** - Organize by resource type - **Version control configs** - Track RBAC changes in git ## Troubleshooting **Role not being extracted** - Ensure `roleHeader` or `jwtClaimPath` is set in config file - For header-based: check that the header is present in requests - For JWT-based: ensure OAuth middleware is enabled before RBAC **Permission checks failing** - Verify `roles[].permissions` is properly configured - Check that `endpoints[].requiredPermissions` matches your routes correctly - Ensure role has the required permission (check inherited permissions too) - Verify route pattern matches exactly (mux patterns supported) - Check role inheritance - ensure inherited permissions are included **Permission always denied** - Check role assignment - verify user's role has the required permission - Review role permissions - ensure `roles[].permissions` includes the required permission - Enable debug logging - check 
debug logs for authorization decisions **Permission always allowed** - Check if endpoint is in RBAC config - routes not in config are allowed to proceed - Check public endpoints - verify endpoint is not marked as `public: true` - Review endpoint configuration - ensure `endpoints[].requiredPermissions` is set correctly - Verify permission check - check logs to see if permission check is being performed **JWT role extraction failing** - Ensure OAuth middleware is enabled before RBAC - Verify JWT claim path is correct **Config file not found** - Ensure config file exists at the specified path - Or use default paths (`configs/rbac.json`, `configs/rbac.yaml`, `configs/rbac.yml`) **Route not being protected by RBAC** - Verify the route is explicitly configured in `endpoints[]` array - Check that the path pattern matches exactly (case-sensitive) - Ensure HTTP method matches (or use `["*"]` for all methods) - Remember: Routes not in RBAC config are allowed to proceed (not blocked) ## How It Works 1. **Role Extraction**: Extracts user role from header (`X-User-Role`) or JWT claims 2. **Endpoint Matching**: Matches request method + path to endpoint configuration 3. **Permission Check**: Verifies role has required permission for the endpoint 4. **Authorization**: Allows or denies request based on permission check The middleware automatically handles all authorization - you just define routes normally. ### Unmatched Routes Behavior **Important**: RBAC only enforces authorization for endpoints that are **explicitly configured** in the RBAC config file. 
- ✅ **Routes in RBAC config**: Authorization is enforced (requires valid role and permissions) - ✅ **Routes NOT in RBAC config**: Requests are allowed to proceed to normal route matching - If the route exists in your application, it will be handled normally - If the route doesn't exist, it will return 404 (route not registered) **Example**: ```json { "endpoints": [ { "path": "/api/users", "methods": ["GET"], "requiredPermissions": ["users:read"] } ] } ``` In this configuration: - `GET /api/users` → **RBAC enforced** (requires `users:read` permission) - `POST /api/users` → **Not in RBAC config** → Allowed to proceed (may return 404 if route doesn't exist) - `GET /api/posts` → **Not in RBAC config** → Allowed to proceed (may return 404 if route doesn't exist) - `GET /health` → **Not in RBAC config** → Allowed to proceed (will work if route exists) This design allows you to: - Gradually add RBAC protection to specific endpoints - Keep some routes unprotected (not in RBAC config) - Let the router handle 404s for non-existent routes ## Security and Privacy ### Telemetry Data Protection RBAC middleware implements industry-standard security practices to protect sensitive data: **Traces (OpenTelemetry):** - ✅ HTTP method and route patterns included - ✅ Authorization status (allowed/denied) included - ❌ Roles excluded (privacy protection - roles are PII) - ❌ Error messages sanitized (prevent information leakage) **Metrics:** - ✅ Authorization decision counts included - ✅ Status (allowed/denied) included - ❌ Roles excluded (avoid high cardinality and PII concerns) **Logs:** - ✅ Roles included (required for compliance: SOC 2, PCI-DSS, NIST) - ✅ HTTP method, route, status, and reason included - ❌ No authorization tokens, headers, or request bodies logged - ❌ No user IDs or personal information logged ### What's Never Logged RBAC middleware never logs: - Authorization tokens (Bearer tokens, API keys) - Request bodies or headers - User IDs or personal information - IP addresses 
in traces/metrics - Detailed error messages exposing internal details ## Related Documentation - [Authentication](https://gofr.dev/docs/advanced-guide/authentication) - Basic Auth, API Keys, OAuth 2.0 - [HTTP Communication](https://gofr.dev/docs/advanced-guide/http-communication) - Inter-service HTTP calls - [Middlewares](https://gofr.dev/docs/advanced-guide/middlewares) - Custom middleware implementation ## Related production guides - **Auth in Kubernetes**: [Run RBAC-protected services on Kubernetes](/docs/guides/auth-in-kubernetes) — JWT verification keys and OIDC issuer wiring in cluster. --- ## https://gofr.dev/docs/advanced-guide/remote-log-level-change # Remote Log Level Change GoFr makes it easy to adjust the details captured in the application's logs, even while it's running! This feature allows users to effortlessly fine-tune logging levels without the need for redeployment, enhancing the monitoring and debugging experience. It is facilitated through simple configuration settings. ## How it helps? - **Effortless Adjustments:** Modify the log level anytime without restarting the application. This is especially helpful during troubleshooting. - **Enhanced Visibility:** Easily switch to a more detailed log level (e.g., `DEBUG`) to gain deeper insights into specific issues, and then switch back to a less detailed level (e.g., `INFO`) for regular operation. - **Improved Performance:** Generating a large number of logs can overwhelm the logging system, leading to increased I/O operations and resource consumption, changing to Warn or Error Level reduces the number of logs, and enhancing performance. ## Configuration To enable remote log level update, users need to specify the following configuration parameter: ```dotenv REMOTE_LOG_URL= (e.g., https://log-service.com/log-levels) REMOTE_LOG_FETCH_INTERVAL= (default: 15) ``` - **REMOTE_LOG_URL:** Specifies the URL of the remote log level endpoint. 
- **REMOTE_LOG_FETCH_INTERVAL:** Defines the time interval (in seconds) at which GoFr fetches log level configurations from the endpoint.

> [!NOTE]
> If not provided, the default interval between requests to fetch the log level is **15 seconds**.

## Remote Log Level Endpoint

The remote log level endpoint should return a JSON response in the following format:

```json
{
  "data": {
    "serviceName": "test-service",
    "logLevel": "DEBUG"
  }
}
```

- **serviceName:** Identifies the service for which log levels are configured.
- **logLevel:** The new log level the user wants to set for the specified service.

GoFr parses this response and adjusts log levels based on the provided configurations.

## Related production guides

- **Production Logging**: [Levels, sampling, redaction, and shipping](/docs/guides/production-logging) — operate logs in production alongside remote level changes.

---

## https://gofr.dev/docs/advanced-guide/serving-static-files

# Serving Static Files using GoFr

Often, we are required to serve static content such as a default profile image, a favicon, or a background image for our web application. We want a mechanism to serve that static content without the hassle of implementing it from scratch.

GoFr provides a default mechanism: if a `static` folder is available in the directory of the application, its contents are automatically served at the `/static/{filename}` endpoint, where `filename` refers to the static file to be served.

Example project structure:

```dotenv
project_folder
|
|---configs
|       .env
|---static
|       img1.jpeg
|       img2.png
|       img3.jpeg
|   main.go
|   main_test.go
```

main.go code:

```go
package main

import "gofr.dev/pkg/gofr"

func main() {
	app := gofr.New()

	app.Run()
}
```

Additionally, if we want to serve more static endpoints, we have a dedicated function called `AddStaticFiles()`, which takes two parameters: the `endpoint` and the `filepath` of the static folder we want to serve.
If the folder contains a `404.html` file, GoFr automatically serves it for any missing URL, redirecting all "Not Found" requests to this page. Example project structure: ```dotenv project_folder | |---configs | .env |---static | img1.jpeg | img2.png | img3.jpeg |---public | |---css | | main.css | |---js | | main.js | | index.html | | 404.html | main.go | main_test.go ``` main.go file: ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.AddStaticFiles("public", "./public") app.Run() } ``` In the above example, both endpoints `/public` and `/static` are available for the app to render the static content. --- ## https://gofr.dev/docs/advanced-guide/setting-custom-response-headers # Custom Response Headers and Metadata in GoFr GoFr simplifies the process of adding custom HTTP response headers and metadata to API responses using the `Response` struct. This feature allows you to include additional information such as custom headers or metadata to enhance client-server communication while keeping your data payload clean and structured. ## Features 1. **Custom Headers**: Add key-value pairs for headers, useful for: - Security policies - Debugging information - Versioning details **Type**: `map[string]string` - Keys and values must be strings. 2. **Metadata**: Include optional contextual information like: - Deployment environment - Request-specific details (e.g., timestamps, tracing IDs) **Type**: `map[string]any` - Keys must be strings, and values can be of any type. When metadata is included, the response structure is: ```json { "data": {}, "metadata": {} } ``` If metadata is omitted, the response defaults to: ```json { "data": {} } ``` ### Example Usage #### Adding Custom Headers and Metadata To include custom headers and metadata in your response, populate the Headers and MetaData fields of the Response struct in your handler function. 
```go
package main

import (
	"time"

	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/http/response"
)

func main() {
	app := gofr.New()

	app.GET("/hello", HelloHandler)

	app.Run()
}

func HelloHandler(c *gofr.Context) (any, error) {
	name := c.Param("name")
	if name == "" {
		c.Log("Name parameter is empty, defaulting to 'World'")
		name = "World"
	}

	// Define custom headers (map[string]string)
	headers := map[string]string{
		"X-Custom-Header":  "CustomValue",
		"X-Another-Header": "AnotherValue",
	}

	// Define metadata (map[string]any)
	metaData := map[string]any{
		"environment": "staging",
		"timestamp":   time.Now(),
	}

	// Return response with custom headers and metadata
	return response.Response{
		Data:     map[string]string{"message": "Hello, " + name + "!"},
		Metadata: metaData,
		Headers:  headers,
	}, nil
}
```

### Example Responses

#### Response with Metadata:

When metadata is included, the response contains the metadata field:

```json
{
  "data": {
    "message": "Hello, World!"
  },
  "metadata": {
    "environment": "staging",
    "timestamp": "2024-12-23T12:34:56Z"
  }
}
```

#### Response without Metadata:

If no metadata is provided, the response only includes the data field:

```json
{
  "data": {
    "message": "Hello, World!"
  }
}
```

This functionality offers a convenient, structured way to include additional response information without altering the core data payload.

---

## https://gofr.dev/docs/advanced-guide/startup-hooks

# Startup Hooks

GoFr provides a way to run synchronous jobs when your application starts, before any servers begin handling requests. This is useful for tasks like seeding a database, warming up a cache, or performing other critical setup procedures.

## OnStart

You can register a startup hook using the `a.OnStart()` method on your `app` instance.

## Usage

The method accepts a function with the signature `func(ctx *gofr.Context) error`.
- The `*gofr.Context` passed to the hook is fully initialized and provides access to all dependency-injection-managed services (e.g., `ctx.Container.SQL`, `ctx.Container.Redis`). - If any `OnStart` hook returns an error, the application will log the error and refuse to start. ### Example: Warming up a Cache Here is an example of using `OnStart` to set an initial value in a Redis cache when the application starts. ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { a := gofr.New() // Register an OnStart hook to warm up a cache. a.OnStart(func(ctx *gofr.Context) error { ctx.Logger.Info("Warming up the cache...") // In a real app, this might come from a database or another service. cacheKey := "initial-data" cacheValue := "This is some data cached at startup." err := ctx.Redis.Set(ctx, cacheKey, cacheValue, 0).Err() if err != nil { ctx.Logger.Errorf("Failed to warm up cache: %v", err) return err // Return the error to halt startup if caching fails. } ctx.Logger.Info("Cache warmed up successfully!") return nil }) // ... register your routes a.Run() } ``` This ensures that critical startup tasks are completed successfully before the application begins accepting traffic. ## Related production guides - **Graceful Shutdown**: [Mirror your startup work on the way down](/docs/guides/graceful-shutdown) — close pools, drain queues, and finish in-flight requests cleanly. --- ## https://gofr.dev/docs/advanced-guide/swagger-documentation # Rendering OpenAPI Documentation in GoFr GoFr supports automatic rendering of OpenAPI (also known as Swagger) documentation. This feature allows you to easily provide interactive API documentation for your users. ## What is OpenAPI/Swagger Documentation? OpenAPI, also known as Swagger, is a specification for building APIs. 
An OpenAPI file allows you to describe your entire API, including:

- Available endpoints (/users) and operations on each endpoint (GET /users, DELETE /users/{id})
- Operation parameters, input, and output for each operation
- Authentication methods
- Contact information, license, terms of use, and other information.

API specifications can be written in YAML or JSON. The format is easy to learn and readable to both humans and machines. The complete OpenAPI Specification can be found on the official [Swagger website](https://swagger.io/).

## Enabling GoFr to render your openapi.json file

To allow GoFr to render your OpenAPI documentation, simply place your `openapi.json` file inside the `static` directory of your project. GoFr will automatically render the Swagger documentation at the `/.well-known/swagger` endpoint.

Here are the steps:

- Create an `openapi.json` file that describes your API according to the OpenAPI specification.
- Place the `openapi.json` file inside the `static` directory in your project.
- Start your GoFr server.
- Navigate to `/.well-known/swagger` on your server’s URL.

You should now see a beautifully rendered, interactive documentation for your API that users can use to understand and interact with your API.

---

## https://gofr.dev/docs/advanced-guide/using-cron

# Cron job scheduling

Cron is a task scheduler that allows users to automate commands or scripts to run at specific times, dates, or intervals. This makes cron a powerful tool for system administrators and developers who want to automate repetitive tasks.

What can users automate with cron?

- **System maintenance**: Cron can be used to schedule regular backups, update software packages, or clean up temporary files.
- **Data processing**: Users can use cron to download data from the internet at specific times, process it, and generate reports.
- **Sending notifications**: Cron can be used to trigger emails or other notifications based on events or system logs.
Basically, any task that can be expressed as a command or script can be automated with cron.

## Writing a cron job

On Linux-like systems, cron jobs can be added by adding a line to the crontab file, specifying the schedule and the command that needs to be run at that schedule. The cron schedule is expressed in the following format:

`minute hour day_of_month month day_of_week`

GoFr also allows an optional `second` field as the first part of the schedule, i.e., the format:

`second minute hour day_of_month month day_of_week`

Each field can take a specific value or combination of values to define the schedule. Users can use special characters like `*` (asterisk) to represent **any** value and `,` (comma) to separate multiple values. It also supports `a-b` to define a range of values for which the cron should run, and `*/n` to run at every nth unit (for example, `*/5` in the minute field runs every 5 minutes). Here n is an integer.

## Adding cron jobs in GoFr applications

Adding cron jobs to GoFr applications is made easy with a simple injection of the user's function into the cron table maintained by GoFr. With the standard five-field format, the minimum time difference between two consecutive runs of a cron job is a minute, as the minute is the least significant scheduling parameter; with the optional `second` field, it is a second.

Cron job with the generic format:

```go
app.AddCronJob("* * * * *", "job-name", func(ctx *gofr.Context) {
	// the cron job that needs to be executed at every minute
})
```

Cron job with the optional second in the format:

```go
app.AddCronJob("* * * * * *", "job-name", func(ctx *gofr.Context) {
	// the cron job that needs to be executed at every second
})
```

The `AddCronJob` method takes three arguments: a cron schedule, the cron job name (for tracing), and the function to be executed at the given schedule.
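To make the field syntax concrete, here is a small self-contained sketch that expands a single schedule field into the values it matches. This is not GoFr's internal parser, only an illustration of the `*`, list, range, and step semantics described above:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// expandField expands one cron field (e.g. the minute field) into the
// concrete values it matches within [min, max]. It handles "*", lists
// ("1,30"), ranges ("0-3"), and steps ("*/15"). Error handling is elided
// for brevity since the inputs below are well-formed.
func expandField(field string, min, max int) []int {
	var out []int

	for _, part := range strings.Split(field, ",") {
		step := 1
		if i := strings.Index(part, "/"); i >= 0 {
			step, _ = strconv.Atoi(part[i+1:])
			part = part[:i]
		}

		lo, hi := min, max
		switch {
		case part == "*":
			// keep the full [min, max] range
		case strings.Contains(part, "-"):
			bounds := strings.SplitN(part, "-", 2)
			lo, _ = strconv.Atoi(bounds[0])
			hi, _ = strconv.Atoi(bounds[1])
		default:
			lo, _ = strconv.Atoi(part)
			hi = lo
		}

		for v := lo; v <= hi; v += step {
			out = append(out, v)
		}
	}

	return out
}

func main() {
	// "*/15" in the minute field matches minutes 0, 15, 30, 45.
	fmt.Println(expandField("*/15", 0, 59)) // → [0 15 30 45]
	// "0-3" matches minutes 0 through 3; "1,30" matches minutes 1 and 30.
	fmt.Println(expandField("0-3", 0, 59)) // → [0 1 2 3]
	fmt.Println(expandField("1,30", 0, 59)) // → [1 30]
}
```

This is why a step expression in a field fires at fixed intervals of that field's unit, rather than a fixed number of times overall.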
### Example

```go
package main

import (
	"time"

	"gofr.dev/pkg/gofr"
)

func main() {
	app := gofr.New()

	// Run the cron job at minute 0 of every 5th hour, i.e., every 5 hours
	app.AddCronJob("0 */5 * * *", "", func(ctx *gofr.Context) {
		ctx.Logger.Infof("current time is %v", time.Now())
	})

	// Run the cron job every 10 seconds (*/10)
	app.AddCronJob("*/10 * * * * *", "", func(ctx *gofr.Context) {
		ctx.Logger.Infof("current time is %v", time.Now())
	})

	app.Run()
}
```

### Cron job metrics

GoFr automatically collects metrics for all registered cron jobs. These metrics are available on the `/metrics` endpoint (default port 2121) and include:

- `app_cron_job_total`: Total number of times a cron job has been triggered.
- `app_cron_job_success`: Number of successful executions.
- `app_cron_job_failures`: Number of failed executions (including panics).
- `app_cron_job_duration`: Duration of execution in **seconds**.

Each metric is labeled with the `job` (user-defined name) to allow fine-grained filtering and monitoring.

> #### Check out the example on how to add cron jobs in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/using-cron-jobs/main.go)

---

## https://gofr.dev/docs/advanced-guide/using-publisher-subscriber

# Publisher Subscriber

Publisher Subscriber is an architectural design pattern for asynchronous communication between different entities. These could be different applications or different instances of the same application. Messages move between the components without the components being aware of each other's identities, meaning the components are decoupled. This makes the application/system more flexible and scalable, as each component can be scaled and maintained according to its own requirements.

## Design choice

In a GoFr application, if a user wants to use the Publisher-Subscriber design, it supports several message brokers, including Apache Kafka, Google PubSub, MQTT, NATS JetStream, Redis Pub/Sub, Azure Event Hubs, and Amazon SQS.
The initialization of the PubSub client is done in an IoC container, which handles the PubSub client dependency. With this, control lies with the framework, promoting modularity, testability, and re-usability. Users can publish and subscribe to multiple topics in a single application by providing the topic name. Users can access the methods of the container to get the Publisher and Subscriber interfaces, to subscribe for messages or publish a message on the message broker.

> Container is part of the GoFr Context

## Configuration and Setup

Some configurations are required to set up the PubSub backend that an application uses; these are specific to the type of message broker the user wants to use. `PUBSUB_BACKEND` defines which message broker the application needs to use.

### Kafka

#### Configs

{% table %}
- Name
- Description
- Required
- Default
- Example
- Valid format
---
- `PUBSUB_BACKEND`
- Using Apache Kafka as message broker.
- `+`
-
- `KAFKA`
- Not empty string
---
- `PUBSUB_BROKER`
- Address to connect to kafka broker. Multiple brokers can be added as comma separated values.
- `+`
-
- `localhost:9092` or `localhost:8087,localhost:8088,localhost:8089`
- Not empty string
---
- `CONSUMER_ID`
- Consumer group id to uniquely identify the consumer group.
- if consuming
-
- `order-consumer`
- Not empty string
---
- `PUBSUB_OFFSET`
- Determines where the consumer group should begin consuming when it finds a partition without a committed offset.
- `-`
- `-1`
- `10`
- int
---
- `KAFKA_BATCH_SIZE`
- Limit on how many messages will be buffered before being sent to a partition.
- `-`
- `100`
- `10`
- Positive int
---
- `KAFKA_BATCH_BYTES`
- Limit the maximum size of a request in bytes before being sent to a partition.
- `-`
- `1048576`
- `65536`
- Positive int
---
- `KAFKA_BATCH_TIMEOUT`
- Time limit on how often incomplete message batches will be flushed to Kafka (in milliseconds).
- `-`
- `1000`
- `300`
- Positive int
---
- `KAFKA_SECURITY_PROTOCOL`
- Security protocol used to communicate with Kafka (e.g., PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL).
- `-`
- `PLAINTEXT`
- `SASL_SSL`
- String
---
- `KAFKA_SASL_MECHANISM`
- SASL mechanism for authentication (e.g., PLAIN, SCRAM-SHA-256, SCRAM-SHA-512).
- `-`
- `""`
- `PLAIN`
- String
---
- `KAFKA_SASL_USERNAME`
- Username for SASL authentication.
- `-`
- `""`
- `user`
- String
---
- `KAFKA_SASL_PASSWORD`
- Password for SASL authentication.
- `-`
- `""`
- `password`
- String
---
- `KAFKA_TLS_CERT_FILE`
- Path to the TLS certificate file.
- `-`
- `""`
- `/path/to/cert.pem`
- Path
---
- `KAFKA_TLS_KEY_FILE`
- Path to the TLS key file.
- `-`
- `""`
- `/path/to/key.pem`
- Path
---
- `KAFKA_TLS_CA_CERT_FILE`
- Path to the TLS CA certificate file.
- `-`
- `""`
- `/path/to/ca.pem`
- Path
---
- `KAFKA_TLS_INSECURE_SKIP_VERIFY`
- Skip TLS certificate verification.
- `-`
- `false`
- `true`
- Boolean
{% /table %}

```dotenv
PUBSUB_BACKEND=KAFKA # using apache kafka as message broker
PUBSUB_BROKER=localhost:9092
CONSUMER_ID=order-consumer
KAFKA_BATCH_SIZE=1000
KAFKA_BATCH_BYTES=1048576
KAFKA_BATCH_TIMEOUT=300
KAFKA_SASL_MECHANISM=PLAIN
KAFKA_SASL_USERNAME=user
KAFKA_SASL_PASSWORD=password
KAFKA_TLS_CERT_FILE=/path/to/cert.pem
KAFKA_TLS_KEY_FILE=/path/to/key.pem
KAFKA_TLS_CA_CERT_FILE=/path/to/ca.pem
KAFKA_TLS_INSECURE_SKIP_VERIFY=true
```

#### Docker setup

```shell
docker run --name kafka-1 -p 9092:9092 \
 -e KAFKA_ENABLE_KRAFT=yes \
 -e KAFKA_CFG_PROCESS_ROLES=broker,controller \
 -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
 -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
 -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
 -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \
 -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \
 -e KAFKA_BROKER_ID=1 \
 -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \
 -e ALLOW_PLAINTEXT_LISTENER=yes \
 -e
KAFKA_CFG_NODE_ID=1 \
 -v kafka_data:/bitnami \
 bitnami/kafka:3.4
```

### GOOGLE

#### Configs

```dotenv
PUBSUB_BACKEND=GOOGLE # using Google PubSub as message broker
GOOGLE_PROJECT_ID=project-order # google projectId where the PubSub is configured
GOOGLE_SUBSCRIPTION_NAME=order-consumer # unique subscription name to identify the subscribing entity
```

#### Docker setup

```shell
docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators

docker run --name=gcloud-emulator -d -p 8086:8086 \
 gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators gcloud beta emulators pubsub start --project=test123 \
 --host-port=0.0.0.0:8086
```

> **Note**: To set GOOGLE_APPLICATION_CREDENTIAL - refer {% new-tab-link title="here" href="https://cloud.google.com/docs/authentication/application-default-credentials" /%}

> **Note**: In Google PubSub, only one subscription name can access one topic; the framework appends the topic name and subscription name to form the unique subscription name on the Google client.
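When developing against the emulator started above, Google's Pub/Sub client libraries honor the standard `PUBSUB_EMULATOR_HOST` environment variable, which routes requests to the local emulator instead of the real service. This is a Google Cloud convention rather than a GoFr-specific config:

```shell
# Point Google Pub/Sub clients at the local emulator
# (host-port from the docker command above).
export PUBSUB_EMULATOR_HOST=localhost:8086

# The project must match the emulator's --project flag.
export GOOGLE_PROJECT_ID=test123
```

Unset `PUBSUB_EMULATOR_HOST` again to talk to the real Google Pub/Sub service.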
### MQTT

#### Configs

```dotenv
PUBSUB_BACKEND=MQTT # using MQTT as pubsub
MQTT_HOST=localhost # broker host URL
MQTT_PORT=1883 # broker port
MQTT_CLIENT_ID_SUFFIX=test # suffix to a randomly generated client-id (uuid v4)

# some additional configs (optional)
MQTT_PROTOCOL=tcp # protocol for connecting to broker: can be tcp, tls, ws or wss
MQTT_MESSAGE_ORDER=true # config to maintain/retain message publish order, by default this is false
MQTT_USER=username # authentication username
MQTT_PASSWORD=password # authentication password
```

> **Note** : If `MQTT_HOST` config is not provided, the application will connect to a public broker
> {% new-tab-link title="EMQX Broker" href="https://www.emqx.com/en/mqtt/public-mqtt5-broker" /%}

#### Docker setup

```shell
docker run -d \
 --name mqtt \
 -p 8883:8883 \
 -v /path/to/mosquitto.conf:/mosquitto/config/mosquitto.conf \
 eclipse-mosquitto:latest
```

> **Note**: find the default mosquitto config file {% new-tab-link title="here" href="https://github.com/eclipse/mosquitto/blob/master/mosquitto.conf" /%}

### NATS JetStream

NATS JetStream is supported as an external PubSub provider, meaning if you're not using it, it won't be added to your binary.

**References**

https://docs.nats.io/
https://docs.nats.io/nats-concepts/jetstream
https://docs.nats.io/using-nats/developer/connecting/creds

#### Configs

```dotenv
PUBSUB_BACKEND=NATS
PUBSUB_BROKER=nats://localhost:4222
NATS_STREAM=mystream
NATS_SUBJECTS=orders.*,shipments.*
NATS_MAX_WAIT=5s
NATS_MAX_PULL_WAIT=500ms
NATS_CONSUMER=my-consumer
NATS_CREDS_FILE=/path/to/creds.json
```

#### Setup

To set up NATS JetStream, follow these steps:

1. Import the external driver for NATS JetStream:

```bash
go get gofr.dev/pkg/gofr/datasource/pubsub/nats
```

2.
Use the `AddPubSub` method to add the NATS JetStream driver to your application:

```go
app := gofr.New()

app.AddPubSub(nats.New(nats.Config{
	Server: "nats://localhost:4222",
	Stream: nats.StreamConfig{
		Stream:   "mystream",
		Subjects: []string{"orders.*", "shipments.*"},
	},
	MaxWait:     5 * time.Second,
	MaxPullWait: 500 * time.Millisecond,
	Consumer:    "my-consumer",
	CredsFile:   "/path/to/creds.json",
}))
```

#### Docker setup

```shell
docker run -d \
 --name nats \
 -p 4222:4222 \
 -p 8222:8222 \
 -v /path/to/nats.conf:/nats/config/nats.conf \
 nats:2.9.16
```

#### Configuration Options

| Name | Description | Required | Default | Example |
|------|-------------|----------|---------|---------|
| `PUBSUB_BACKEND` | Set to "NATS" to use NATS JetStream as the message broker | Yes | - | `NATS` |
| `PUBSUB_BROKER` | NATS server URL | Yes | - | `nats://localhost:4222` |
| `NATS_STREAM` | Name of the NATS stream | Yes | - | `mystream` |
| `NATS_SUBJECTS` | Comma-separated list of subjects to subscribe to | Yes | - | `orders.*,shipments.*` |
| `NATS_MAX_WAIT` | Maximum wait time for batch requests | No | - | `5s` |
| `NATS_MAX_PULL_WAIT` | Maximum wait time for individual pull requests | No | 0 | `500ms` |
| `NATS_CONSUMER` | Name of the NATS consumer | No | - | `my-consumer` |
| `NATS_CREDS_FILE` | Path to the credentials file for authentication | No | - | `/path/to/creds.json` |

#### Usage

When subscribing or publishing using NATS JetStream, make sure to use the appropriate subject name that matches your stream configuration.

For more information on setting up and using NATS JetStream, refer to the official NATS documentation.

### Redis Pub/Sub

Redis Pub/Sub is a lightweight messaging system. GoFr supports two modes:

1. **Streams Mode** (Default): Uses Redis Streams for persistent messaging with consumer groups and acknowledgments.
2. **PubSub Mode**: Standard Redis Pub/Sub (fire-and-forget, no persistence).
#### Redis connection Redis Pub/Sub uses the same Redis connection configuration as the Redis datasource (`REDIS_HOST`, `REDIS_PORT`, `REDIS_DB`, TLS, etc.). See the config reference: `https://gofr.dev/docs/references/configs#redis`. #### Example `.env` ```dotenv PUBSUB_BACKEND=REDIS REDIS_HOST=localhost REDIS_PORT=6379 REDIS_USER=myuser REDIS_PASSWORD=mypassword REDIS_DB=0 REDIS_PUBSUB_DB=1 REDIS_TLS_ENABLED=true REDIS_TLS_CA_CERT=/path/to/ca.pem REDIS_TLS_CERT=/path/to/cert.pem REDIS_TLS_KEY=/path/to/key.pem # Streams mode (default) - requires consumer group REDIS_STREAMS_CONSUMER_GROUP=my-group REDIS_STREAMS_CONSUMER_NAME=my-consumer REDIS_STREAMS_BLOCK_TIMEOUT=5s REDIS_STREAMS_PEL_RATIO=0.7 # 70% PEL, 30% new messages REDIS_STREAMS_MAXLEN=1000 # To use PubSub mode instead, set: # REDIS_PUBSUB_MODE=pubsub ``` #### Docker setup ```shell docker run -d \ --name redis \ -p 6379:6379 \ redis:7-alpine ``` For Redis with password authentication: ```shell docker run -d \ --name redis \ -p 6379:6379 \ redis:7-alpine redis-server --requirepass mypassword ``` #### Redis configs The following configs apply specifically to Redis Pub/Sub behavior. For base Redis connection/TLS configs, refer to `https://gofr.dev/docs/references/configs#redis`. {% table %} - Name - Description - Default - Example --- - `PUBSUB_BACKEND` - Set to `REDIS` to use Redis as the Pub/Sub backend. - - - `REDIS` --- - `REDIS_PUBSUB_MODE` - Mode: `streams` (default, at-least-once) or `pubsub` (at-most-once) - `streams` - `pubsub` --- - `REDIS_STREAMS_CONSUMER_GROUP` - Consumer group name (required in streams mode) - - - `mygroup` --- - `REDIS_STREAMS_CONSUMER_NAME` - Consumer name (optional; auto-generated if empty) - - - `consumer-1` --- - `REDIS_STREAMS_BLOCK_TIMEOUT` - Blocking timeout for stream reads. Lower values (1s-2s) = faster detection, higher CPU. Higher values (10s-30s) = lower CPU, higher latency. 
- `5s`
- `2s` or `30s`
---
- `REDIS_STREAMS_PEL_RATIO`
- Ratio of PEL (pending) messages to read vs new messages (0.0-1.0). Ratio determines initial PEL allocation; all remaining capacity is always filled with new messages.
- `0.7`
- `0.5` or `0.8`
---
- `REDIS_STREAMS_MAXLEN`
- Max stream length for trimming (approximate). Set to `0` for unlimited.
- `0` (unlimited)
- `10000`
---
- `REDIS_PUBSUB_DB`
- Redis DB for Pub/Sub operations. Keep different from `REDIS_DB` when using migrations + streams mode.
- `15`
- `1`
---
- `REDIS_PUBSUB_BUFFER_SIZE`
- Message buffer size
- `100`
- `1000`
---
- `REDIS_PUBSUB_QUERY_TIMEOUT`
- Timeout for Query operations
- `5s`
- `30s`
---
- `REDIS_PUBSUB_QUERY_LIMIT`
- Message limit for Query operations
- `10`
- `50`
{% /table %}

> **Important**: If `REDIS_STREAMS_CONSUMER_GROUP` is empty or not provided, an error will occur when attempting to subscribe. However, publishing will work correctly without it.

For Redis with TLS:

```shell
docker run -d \
 --name redis \
 -p 6379:6379 \
 -v /path/to/certs:/tls \
 redis:7-alpine redis-server \
 --tls-port 6380 \
 --port 0 \
 --tls-cert-file /tls/redis.crt \
 --tls-key-file /tls/redis.key \
 --tls-ca-cert-file /tls/ca.crt
```

> **Note**: Topics are auto-created on first publish. When using GoFr migrations with Streams mode, keep `REDIS_DB` and `REDIS_PUBSUB_DB` separate (defaults: 0 and 15). For `REDIS_STREAMS_BLOCK_TIMEOUT`: use 1s-2s for real-time or 10s-30s for batch processing.

### Azure Event Hubs

GoFr supports Event Hubs starting from version v1.22.0. While subscribing, GoFr reads from all the partitions of the consumer group provided in the configuration, reducing the hassle of managing them.

#### Setup

Azure Event Hubs is supported as an external PubSub provider, such that if you are not using it, it doesn't get added to your binary.

Import the external driver for `eventhub` using the following command.
```bash
go get gofr.dev/pkg/gofr/datasource/pubsub/eventhub
```

Use the `AddPubSub` method of GoFr's app to connect.

**Example**

```go
app := gofr.New()

app.AddPubSub(eventhub.New(eventhub.Config{
	ConnectionString:          "Endpoint=sb://gofr-dev.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=",
	ContainerConnectionString: "DefaultEndpointsProtocol=https;AccountName=gofrdev;AccountKey=;EndpointSuffix=core.windows.net",
	StorageServiceURL:         "https://gofrdev.windows.net/",
	StorageContainerName:      "test",
	EventhubName:              "test1",
	ConsumerGroup:             "$Default",
}))
```

While subscribing/publishing to Event Hubs, make sure to keep the topic name the same as the event-hub name.

#### Configs

1. To set up Azure Event Hubs, refer to the following [documentation](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create).
2. As GoFr manages reading from all the partitions, it needs to store information about what has been read and what is left; for that, GoFr uses an Azure Container, which can be set up from the following [documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/blob-containers-portal).
##### Mandatory Configs Configuration Map {% table %} - ConnectionString - [connection-string-primary-key](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string) --- - ContainerConnectionString - [ConnectionString](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json&tabs=azure-portal#view-account-access-keys) --- - StorageServiceURL - [Blob Service URL](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-get-info?tabs=portal#get-service-endpoints-for-the-storage-account) --- - StorageContainerName - [Container Name](https://learn.microsoft.com/en-us/azure/storage/blobs/blob-containers-portal#create-a-container) --- - EventhubName - [Eventhub](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create#create-an-event-hub) {% /table %} ### Amazon SQS GoFr supports Amazon Simple Queue Service (SQS) as an external PubSub provider. SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. #### Setup Import the external driver for `sqs` using the following command. ```bash go get gofr.dev/pkg/gofr/datasource/pubsub/sqs ``` Use the `AddPubSub` method of GoFr's app to connect. **Example** ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/pubsub/sqs" ) func main() { app := gofr.New() app.AddPubSub(sqs.New(&sqs.Config{ Region: "us-east-1", AccessKeyID: "your-access-key-id", // optional if using IAM roles SecretAccessKey: "your-secret-access-key", // optional if using IAM roles // Endpoint: "http://localhost:4566", // optional: for LocalStack })) app.Run() } ``` > **Note**: When using IAM roles (e.g., on EC2 or ECS), you can omit `AccessKeyID` and `SecretAccessKey`. The SDK will automatically use the instance's IAM role credentials. 
#### Configs {% table %} - Name - Description - Required - Default - Example --- - `Region` - AWS region where the SQS queue is located. - Yes - - - `us-east-1` --- - `AccessKeyID` - AWS access key ID for authentication. - No - Uses default credential chain - `AKIAIOSFODNN7EXAMPLE` --- - `SecretAccessKey` - AWS secret access key for authentication. - No - Uses default credential chain - `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` --- - `SessionToken` - AWS session token for temporary credentials. - No - - - `FwoGZXIvYXdzE...` --- - `Endpoint` - Custom endpoint URL for SQS. Useful for local development with LocalStack. - No - AWS default endpoint - `http://localhost:4566` {% /table %} > **Note**: SQS queues must be created before publishing or subscribing. Use AWS CLI, AWS Console, or the `CreateTopic` method in migrations to create queues programmatically. GoFr supports Standard Queues by default—FIFO queues are not currently supported. Advanced features like Dead Letter Queues (DLQ) and Broadcast (SNS) can be configured at the infrastructure level. #### LocalStack setup (local development) [LocalStack](https://localstack.cloud/) emulates AWS services locally, making it ideal for development and testing without an AWS account. 
```shell
docker run -d \
 --name localstack \
 -p 4566:4566 \
 -e SERVICES=sqs \
 localstack/localstack:latest
```

After LocalStack is running, create queues using the AWS CLI:

```shell
aws --endpoint-url=http://localhost:4566 --region us-east-1 \
 sqs create-queue --queue-name order-logs

aws --endpoint-url=http://localhost:4566 --region us-east-1 \
 sqs create-queue --queue-name products
```

When using LocalStack, set the `Endpoint` field in `sqs.Config` to point at LocalStack and use dummy credentials:

```go
app.AddPubSub(sqs.New(&sqs.Config{
	Region:          "us-east-1",
	Endpoint:        "http://localhost:4566",
	AccessKeyID:     "test",
	SecretAccessKey: "test",
}))
```

## Subscribing

Adding a subscriber is similar to adding an HTTP handler, which makes it easier to develop scalable applications, as it is decoupled from the Sender/Publisher. Users can define a subscriber handler, do the message processing, and use `app.Subscribe` to inject the handler into the application. This is the inversion-of-control pattern, which lets the control stay with the framework and eases the development and debugging process.

The subscriber handler has the following signature.

```go
func (ctx *gofr.Context) error
```

The `Subscribe` method of GoFr App will continuously read messages from the configured `PUBSUB_BACKEND`, which can be `KAFKA`, `GOOGLE`, `MQTT`, `NATS`, `REDIS`, or `AZURE_EVENTHUB`. For external providers like NATS JetStream, Azure Event Hubs, and Amazon SQS, use `app.AddPubSub()` instead. These can be configured in the configs folder under `.env`.

> The returned error determines which messages are to be committed and which ones are to be consumed again.

```go
// First argument is the `topic name` followed by a handler which would process the
// published messages continuously and asynchronously.
app.Subscribe("order-status", func(ctx *gofr.Context) error {
	// Handle the pub-sub message here
})
```

The context `ctx` provides the user with the following methods:

* `Bind()` - Binds the message value to a given data type. A message can be converted to `struct`, `map[string]any`, `int`, `bool`, `float64` and `string` types.
* `Param(p string)/PathParam(p string)` - Returns the topic when it is passed as the param.

### Example

```go
package main

import (
	"gofr.dev/pkg/gofr"
)

func main() {
	app := gofr.New()

	app.Subscribe("order-status", func(c *gofr.Context) error {
		var orderStatus struct {
			OrderId string `json:"orderId"`
			Status  string `json:"status"`
		}

		err := c.Bind(&orderStatus)
		if err != nil {
			c.Logger.Error(err)

			// returning nil here as we would like to ignore the
			// incompatible message and continue reading forward
			return nil
		}

		c.Logger.Info("Received order ", orderStatus)

		return nil
	})

	app.Run()
}
```

## Publishing

Publishing a message is advised to be done at the point where the message is generated. To facilitate this, the user can access the publishing interface from the `gofr Context(ctx)` to publish messages.

```go
ctx.GetPublisher().Publish(ctx, "topic", msg)
```

Users can provide the topic to which the message is to be published. GoFr also supports publishing to multiple topics, which is beneficial as applications may need to send multiple kinds of messages to multiple topics.
### Example ```go package main import ( "encoding/json" "gofr.dev/pkg/gofr" ) func main() { app := gofr.New() app.POST("/publish-order", order) app.Run() } func order(ctx *gofr.Context) (any, error) { type orderStatus struct { OrderId string `json:"orderId"` Status string `json:"status"` } var data orderStatus err := ctx.Bind(&data) if err != nil { return nil, err } msg, err := json.Marshal(data) if err != nil { return nil, err } err = ctx.GetPublisher().Publish(ctx, "order-logs", msg) if err != nil { return nil, err } return "Published", nil } ``` > #### Check out the following examples on how to publish/subscribe to given topics: > ##### [Subscribing Topics](https://github.com/gofr-dev/gofr/blob/main/examples/using-subscriber/main.go) > ##### [Publishing Topics](https://github.com/gofr-dev/gofr/blob/main/examples/using-publisher/main.go) ## Distributed Tracing GoFr automatically traces every publish and subscribe call across Kafka, NATS JetStream, Google Pub/Sub, and Amazon SQS. **No user code is required**: as long as `TRACE_EXPORTER` is configured (see {% new-tab-link newtab=false title="Observability → Tracing" href="/docs/quick-start/observability#tracing" /%}), the framework wires everything in. ### How it works When you call `ctx.GetPublisher().Publish(ctx, topic, msg)`, GoFr: 1. Starts a span named `<backend>-publish` (for example `kafka-publish`) with `SpanKind=Producer` and attributes `messaging.system`, `messaging.destination.name`, `messaging.operation=publish`. 2. Injects the current trace context into the outgoing message using the W3C Trace Context propagator. For Kafka this rides in message headers; for Google Pub/Sub and SQS in message attributes; for NATS in message headers; and so on. When the message is delivered to a subscriber registered with `app.Subscribe(topic, handler)`, GoFr: 1. Extracts the producer's trace context from the incoming message. 2. Starts a span named `<backend>-subscribe` with `SpanKind=Consumer`, **as a child of the producer's span**. 
This means the consumer span shares the same `TraceID` as the publisher and lists the publisher's span as its parent. 3. Also attaches an OpenTelemetry **span link** to the producer span. The link preserves fan-out semantics — a single message may be consumed by multiple consumer groups — for tools that surface them (Jaeger, Tempo, etc.). The result is that an end-to-end flow such as `HTTP → publish → subscribe → publish → subscribe` shows up as **one connected trace** in any tracing UI, with the full waterfall visible: ```text [api-gateway ] POST /order (root) [api-gateway ] kafka-publish child of POST /order [order-service ] kafka-subscribe child of api-gateway's publish [+1 link] [order-service ] kafka-publish child of order-service's subscribe [notification-service ] kafka-subscribe child of order-service's publish [+1 link] ``` ### Sampling and scale GoFr's tracer uses `ParentBased(TraceIDRatioBased(TRACER_RATIO))` (see `pkg/gofr/otel.go`). Because the consumer span inherits the producer's sampling decision, head-based sampling via `TRACER_RATIO` is consistent across the entire chain — if the producer is sampled out, every downstream consumer span is dropped at creation as well. For high-throughput pipelines, set `TRACER_RATIO` below `1.0` (for example `0.1` for 10% sampling) to keep trace volume manageable. For very long-lived async sagas (where one trace stays open for hours), prefer tail-based sampling at the OpenTelemetry Collector tier. > [!NOTE] > Distributed tracing for pub/sub is fully transparent — the existing examples in {% new-tab-link title="examples/using-publisher" href="https://github.com/gofr-dev/gofr/tree/main/examples/using-publisher" /%} and {% new-tab-link title="examples/using-subscriber" href="https://github.com/gofr-dev/gofr/tree/main/examples/using-subscriber" /%} already produce connected traces without any tracing-specific code. 
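Putting the tracing settings together, a `.env` along these lines enables exporting with 10% head-based sampling. The exporter/host/port keys follow the Observability docs referenced above; treat the exact values as placeholders for your own collector:

```env
TRACE_EXPORTER=jaeger
TRACER_HOST=localhost
TRACER_PORT=4318
TRACER_RATIO=0.1
```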
--- ## https://gofr.dev/docs/advanced-guide/websocket # Websockets WebSockets provide a full-duplex communication channel over a single, long-lived connection, making them ideal for real-time applications like chat, notifications, and live updates. GoFr provides a convenient way to integrate websockets into your application. By leveraging GoFr's WebSocket support and customizable upgrader options, users can efficiently manage real-time communication in their applications. ## Usage in GoFr Here is a simple example to set up a WebSocket server in GoFr: ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { app := gofr.New() app.WebSocket("/ws", WSHandler) app.Run() } func WSHandler(ctx *gofr.Context) (any, error) { var message string err := ctx.Bind(&message) if err != nil { ctx.Logger.Errorf("Error binding message: %v", err) return nil, err } ctx.Logger.Infof("Received message: %s", message) return message, nil } ``` ## Configuration Options GoFr allows us to customize the WebSocket upgrader with several options. We can set these options using the `websocket.NewWSUpgrader` function. Here is the list of options we can apply to the websocket upgrader using GoFr. - `HandshakeTimeout (WithHandshakeTimeout)`: Sets the handshake timeout. - `ReadBufferSize (WithReadBufferSize)`: Sets the size of the read buffer. - `WriteBufferSize (WithWriteBufferSize)`: Sets the size of the write buffer. - `Subprotocols (WithSubprotocols)`: Sets the supported sub-protocols. - `Error (WithError)`: Sets a custom error handler. - `CheckOrigin (WithCheckOrigin)`: Sets a custom origin check function. - `Compression (WithCompression)`: Enables compression. ## Writing Messages GoFr provides the `WriteMessageToSocket` method to send messages to the underlying websocket connection in a thread-safe way. The `data` parameter can be a `string`, `[]byte`, or any struct that can be marshaled to JSON. ## Example: We can configure the Upgrader by creating a chain of option functions provided by GoFr. 
```go package main import ( "time" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/websocket" ) func main() { app := gofr.New() wsUpgrader := websocket.NewWSUpgrader( websocket.WithHandshakeTimeout(5*time.Second), // Set handshake timeout websocket.WithReadBufferSize(2048), // Set read buffer size websocket.WithWriteBufferSize(2048), // Set write buffer size websocket.WithSubprotocols("chat", "binary"), // Specify subprotocols websocket.WithCompression(), // Enable compression ) app.OverrideWebsocketUpgrader(wsUpgrader) app.WebSocket("/ws", WSHandler) app.Run() } func WSHandler(ctx *gofr.Context) (any, error) { var message string err := ctx.Bind(&message) if err != nil { ctx.Logger.Errorf("Error binding message: %v", err) return nil, err } ctx.Logger.Infof("Received message: %s", message) err = ctx.WriteMessageToSocket("Hello! GoFr") if err != nil { return nil, err } return message, nil } ``` > #### Check out the example on how to read/write through a WebSocket in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/using-web-socket/main.go) ## Inter-Service WebSocket Communication GoFr also supports Inter-Service WebSocket Communication, enabling seamless communication between services using WebSocket connections. This feature is particularly useful for microservices architectures where services need to exchange real-time data. ## Key Methods: 1. **AddWSService** This method registers a WebSocket service and establishes a persistent connection to the specified service. It also supports automatic reconnection in case of connection failures. **Parameters:** - `serviceName (string)`: A unique name for the WebSocket service. - `url (string)`: The WebSocket URL of the target service. - `headers (map[string][]string)`: HTTP headers to include in the WebSocket handshake. - `enableReconnection (bool)`: A boolean to enable automatic reconnection. - `retryInterval (time.Duration)`: The interval between reconnection attempts. 2. 
**WriteMessageToService** This method sends a message to a WebSocket connection associated with a specific service. **Parameters:** - `serviceName (string)`: The name of the WebSocket service. - `data (any)`: The message to send. It can be a string, []byte, or any struct that can be marshaled to JSON. ## Usage in GoFr ```go package main import ( "time" "gofr.dev/pkg/gofr" ) func main() { app := gofr.New() // Add a WebSocket service err := app.AddWSService("notification-service", "ws://notifications.example.com/ws", nil, true, 5*time.Second) if err != nil { app.Logger().Errorf("Failed to add WebSocket service: %v", err) return } // Example route to send a message to the notification service app.POST("/send-notification", func(ctx *gofr.Context) (any, error) { message := map[string]string{ "title": "New Message", "content": "You have a new notification!", } err := ctx.WriteMessageToService("notification-service", message) if err != nil { ctx.Logger.Errorf("Failed to send message: %v", err) return nil, err } return "Notification sent successfully!", nil }) app.Run() } ``` --- # Datasources ## https://gofr.dev/docs/datasources/arangodb # ArangoDB ## Configuration To connect to `ArangoDB`, you need to provide the following environment variables: - `HOST`: The hostname or IP address of your `ArangoDB` server. - `USER`: The username for connecting to the database. - `PASSWORD`: The password for the specified user. - `PORT`: The port number ## Setup GoFr supports injecting `ArangoDB` that implements the following interface. Any driver that implements the interface can be added using the `app.AddArangoDB()` method, and users can use ArangoDB across the application with `gofr.Context`. ```go type ArangoDB interface { // CreateDB creates a new database in ArangoDB. CreateDB(ctx context.Context, database string) error // DropDB deletes an existing database in ArangoDB. 
DropDB(ctx context.Context, database string) error // CreateCollection creates a new collection in a database with specified type. CreateCollection(ctx context.Context, database, collection string, isEdge bool) error // DropCollection deletes an existing collection from a database. DropCollection(ctx context.Context, database, collection string) error // CreateGraph creates a new graph in a database. CreateGraph(ctx context.Context, database, graph string, edgeDefinitions any) error // DropGraph deletes an existing graph from a database. DropGraph(ctx context.Context, database, graph string) error // CreateDocument creates a new document in the specified collection. CreateDocument(ctx context.Context, dbName, collectionName string, document any) (string, error) // GetDocument retrieves a document by its ID from the specified collection. GetDocument(ctx context.Context, dbName, collectionName, documentID string, result any) error // UpdateDocument updates an existing document in the specified collection. UpdateDocument(ctx context.Context, dbName, collectionName, documentID string, document any) error // DeleteDocument deletes a document by its ID from the specified collection. DeleteDocument(ctx context.Context, dbName, collectionName, documentID string) error // GetEdges retrieves all the edge documents connected to a specific vertex in an ArangoDB graph. GetEdges(ctx context.Context, dbName, graphName, edgeCollection, vertexID string, resp any) error // Query executes an AQL query and binds the results Query(ctx context.Context, dbName string, query string, bindVars map[string]any, result any, options ...map[string]any) error HealthCheck(context.Context) (any, error) } ``` Users can easily inject a driver that supports this interface, providing usability without compromising the extensibility to use multiple databases. 
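The `HOST`/`USER`/`PASSWORD`/`PORT` variables listed earlier can be supplied via `configs/.env`. A sample with placeholder values (8529 is ArangoDB's default port):

```env
HOST=localhost
USER=root
PASSWORD=root
PORT=8529
```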
Import GoFr's external driver for ArangoDB: ```shell go get gofr.dev/pkg/gofr/datasource/arangodb@latest ``` ## Example ```go package main import ( "fmt" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/arangodb" ) type Person struct { Name string `json:"name"` Age int `json:"age"` } func main() { app := gofr.New() // Configure the ArangoDB client arangoClient := arangodb.New(arangodb.Config{ Host: app.Config.Get("HOST"), User: app.Config.Get("USER"), Password: app.Config.Get("PASSWORD"), Port: app.Config.Get("PORT"), }) app.AddArangoDB(arangoClient) // Example routes demonstrating different types of operations app.POST("/setup", Setup) app.POST("/users/{name}", CreateUserHandler) app.POST("/friends", CreateFriendship) app.GET("/friends/{collection}/{vertexID}", GetEdgesHandler) app.Run() } // Setup demonstrates database, collection, and graph creation func Setup(ctx *gofr.Context) (any, error) { err := ctx.ArangoDB.CreateDB(ctx, "social_network") if err != nil { return nil, fmt.Errorf("failed to create database: %w", err) } if err := createCollection(ctx, "social_network", "persons"); err != nil { return nil, err } if err := createCollection(ctx, "social_network", "friendships"); err != nil { return nil, err } // Define and create the graph edgeDefs := arangodb.EdgeDefinition{ {Collection: "friendships", From: []string{"persons"}, To: []string{"persons"}}, } err = ctx.ArangoDB.CreateGraph(ctx, "social_network", "social_graph", edgeDefs) if err != nil { return nil, fmt.Errorf("failed to create graph: %w", err) } return "Setup completed successfully", nil } // Helper function to create document (non-edge) collections func createCollection(ctx *gofr.Context, dbName, collectionName string) error { err := ctx.ArangoDB.CreateCollection(ctx, dbName, collectionName, false) if err != nil { return fmt.Errorf("failed to create collection %s: %w", collectionName, err) } return nil } // CreateUserHandler demonstrates user management and document creation func 
CreateUserHandler(ctx *gofr.Context) (any, error) { name := ctx.PathParam("name") // Create a person document person := Person{ Name: name, Age: 25, } docID, err := ctx.ArangoDB.CreateDocument(ctx, "social_network", "persons", person) if err != nil { return nil, fmt.Errorf("failed to create person document: %w", err) } return map[string]string{ "message": "User created successfully", "docID": docID, }, nil } // CreateFriendship demonstrates edge document creation func CreateFriendship(ctx *gofr.Context) (any, error) { var req struct { From string `json:"from"` To string `json:"to"` StartDate string `json:"startDate"` } if err := ctx.Bind(&req); err != nil { return nil, err } edgeDocument := map[string]any{ "_from": fmt.Sprintf("persons/%s", req.From), "_to": fmt.Sprintf("persons/%s", req.To), "startDate": req.StartDate, } // Create an edge document for the friendship edgeID, err := ctx.ArangoDB.CreateDocument(ctx, "social_network", "friendships", edgeDocument) if err != nil { return nil, fmt.Errorf("failed to create friendship: %w", err) } return map[string]string{ "message": "Friendship created successfully", "edgeID": edgeID, }, nil } // GetEdgesHandler demonstrates fetching edges connected to a vertex func GetEdgesHandler(ctx *gofr.Context) (any, error) { collection := ctx.PathParam("collection") vertexID := ctx.PathParam("vertexID") fullVertexID := fmt.Sprintf("%s/%s", collection, vertexID) // Prepare a slice to hold edge details edges := make(arangodb.EdgeDetails, 0) // Fetch all edges connected to the given vertex err := ctx.ArangoDB.GetEdges(ctx, "social_network", "social_graph", "friendships", fullVertexID, &edges) if err != nil { return nil, fmt.Errorf("failed to get edges: %w", err) } return map[string]any{ "vertexID": vertexID, "edges": edges, }, nil } ``` --- ## https://gofr.dev/docs/datasources/cassandra # Cassandra GoFr supports pluggable Cassandra drivers. 
## Configuration To connect to `Cassandra`, you need to provide the following environment variables: - `HOSTS`: The hostname or IP address of your Cassandra server. - `KEYSPACE`: The name of the keyspace (like a database) that holds your tables and defines replication and durability settings. - `PORT`: The port number - `USERNAME`: The username for connecting to the database. - `PASSWORD`: The password for the specified user. ## Setup GoFr defines an interface that specifies the required methods for interacting with Cassandra. Any driver implementation that adheres to this interface can be integrated into GoFr using the `app.AddCassandra()` method. This approach promotes flexibility and allows you to choose the Cassandra driver that best suits your project's needs. ```go type CassandraWithContext interface { QueryWithCtx(ctx context.Context, dest any, stmt string, values ...any) error ExecWithCtx(ctx context.Context, stmt string, values ...any) error ExecCASWithCtx(ctx context.Context, dest any, stmt string, values ...any) (bool, error) NewBatchWithCtx(ctx context.Context, name string, batchType int) error Cassandra CassandraBatchWithContext } type CassandraBatchWithContext interface { BatchQueryWithCtx(ctx context.Context, name, stmt string, values ...any) error ExecuteBatchWithCtx(ctx context.Context, name string) error ExecuteBatchCASWithCtx(ctx context.Context, name string, dest ...any) (bool, error) } ``` GoFr simplifies Cassandra integration with a well-defined interface. Users can easily implement any driver that adheres to this interface, fostering a user-friendly experience. 
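The environment variables above can live in `configs/.env`. A sample with placeholder values (9042 is Cassandra's default CQL port):

```env
HOSTS=localhost
PORT=9042
KEYSPACE=test_keyspace
USERNAME=cassandra
PASSWORD=cassandra
```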
Import GoFr's external driver for Cassandra: ```shell go get gofr.dev/pkg/gofr/datasource/cassandra@latest ``` ### Example ```go package main import ( "gofr.dev/pkg/gofr" cassandraPkg "gofr.dev/pkg/gofr/datasource/cassandra" ) type Person struct { ID int `json:"id,omitempty"` Name string `json:"name"` Age int `json:"age"` // db tag specifies the actual column name in the database State string `json:"state" db:"location"` } func main() { app := gofr.New() config := cassandraPkg.Config{ Hosts: app.Config.Get("HOSTS"), Keyspace: app.Config.Get("KEYSPACE"), Port: app.Config.Get("PORT"), Username: app.Config.Get("USERNAME"), Password: app.Config.Get("PASSWORD"), } cassandra := cassandraPkg.New(config) app.AddCassandra(cassandra) app.POST("/user", func(c *gofr.Context) (any, error) { person := Person{} err := c.Bind(&person) if err != nil { return nil, err } err = c.Cassandra.ExecWithCtx(c, `INSERT INTO persons(id, name, age, location) VALUES(?, ?, ?, ?)`, person.ID, person.Name, person.Age, person.State) if err != nil { return nil, err } return "created", nil }) app.GET("/user", func(c *gofr.Context) (any, error) { persons := make([]Person, 0) err := c.Cassandra.QueryWithCtx(c, &persons, `SELECT id, name, age, location FROM persons`) return persons, err }) app.Run() } ``` --- ## https://gofr.dev/docs/datasources/clickhouse # ClickHouse ## Configuration To connect to `ClickHouse`, you need to provide the following environment variables: - `HOSTS`: The hostname or IP address of your `ClickHouse` server. - `USERNAME`: The username for connecting to the database. - `PASSWORD`: The password for the specified user. - `DATABASE`: The name of the database to connect to. ## Setup GoFr supports injecting any ClickHouse driver that implements the following interface. Such a driver can be added using the `app.AddClickhouse()` method, and users can then use ClickHouse across the application with `gofr.Context`. 
```go type Clickhouse interface { Exec(ctx context.Context, query string, args ...any) error Select(ctx context.Context, dest any, query string, args ...any) error AsyncInsert(ctx context.Context, query string, wait bool, args ...any) error } ``` Users can easily inject a driver that supports this interface, providing usability without compromising the extensibility to use multiple databases. Import GoFr's external driver for ClickHouse: ```shell go get gofr.dev/pkg/gofr/datasource/clickhouse@latest ``` ### Example ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/clickhouse" ) type User struct { Id string `ch:"id"` Name string `ch:"name"` Age int `ch:"age"` } func main() { app := gofr.New() app.AddClickhouse(clickhouse.New(clickhouse.Config{ Hosts: app.Config.Get("HOSTS"), Username: app.Config.Get("USERNAME"), Password: app.Config.Get("PASSWORD"), Database: app.Config.Get("DATABASE"), })) app.POST("/user", Post) app.GET("/user", Get) app.Run() } func Post(ctx *gofr.Context) (any, error) { err := ctx.Clickhouse.Exec(ctx, "INSERT INTO users (id, name, age) VALUES (?, ?, ?)", "8f165e2d-feef-416c-95f6-913ce3172e15", "aryan", 10) if err != nil { return nil, err } return "successfully inserted", nil } func Get(ctx *gofr.Context) (any, error) { var user []User err := ctx.Clickhouse.Select(ctx, &user, "SELECT * FROM users") if err != nil { return nil, err } return user, nil } ``` --- ## https://gofr.dev/docs/datasources/cockroachdb # CockroachDB GoFr provides support for CockroachDB, a cloud-native SQL database that is compatible with PostgreSQL. ## Configuration To connect to CockroachDB, you need to provide the following environment variables: * `DB_DIALECT`: Set to `cockroachdb` * `DB_HOST`: The hostname or IP address of your CockroachDB server. * `DB_PORT`: The port number (default is 26257). * `DB_USER`: The username for connecting to the database. * `DB_PASSWORD`: The password for the specified user. 
* `DB_NAME`: The name of the database to connect to. * `DB_SSL_MODE`: SSL mode (e.g., `disable`, `require`). CockroachDB Cloud requires SSL. ## Example ```go package main import ( "gofr.dev/pkg/gofr" ) func main() { // Create a new GoFr app app := gofr.New() app.GET("/user", GetUser) app.Run() } func GetUser(ctx *gofr.Context) (any, error) { // Example: Performing a simple query; passing ctx propagates // cancellation and tracing to the database call rows, err := ctx.SQL.QueryContext(ctx, "SELECT 1") if err != nil { return nil, err } defer rows.Close() return "Connection to CockroachDB successful.", nil } ``` For more detailed examples and advanced usage, please refer to the [SQL usage guide](/docs/advanced-guide/dealing-with-sql/). --- ## https://gofr.dev/docs/datasources/couchbase # Couchbase ## Configuration To connect to `Couchbase`, you need to provide the following environment variables: - `HOST`: The hostname or IP address of your Couchbase server. - `USER`: The username for connecting to the database. - `PASSWORD`: The password for the specified user. - `BUCKET`: The top-level container for data. ## Setup GoFr supports injecting `Couchbase` that implements the following interface. Any driver that implements the interface can be added using the `app.AddCouchbase()` method, and users can use Couchbase across the application with `gofr.Context`. ```go type Couchbase interface { Get(ctx context.Context, key string, result any) error Insert(ctx context.Context, key string, document, result any) error Upsert(ctx context.Context, key string, document any, result any) error Remove(ctx context.Context, key string) error Query(ctx context.Context, statement string, params map[string]any, result any) error AnalyticsQuery(ctx context.Context, statement string, params map[string]any, result any) error } ``` Users can easily inject a driver that supports this interface, providing usability without compromising the extensibility to use multiple databases. 
Don't forget to set up the Couchbase cluster in the Couchbase Web Console first. [Follow for more details](https://docs.couchbase.com/server/current/install/getting-started-docker.html#section_jvt_zvj_42b). To begin using Couchbase in your GoFr application, you need to import the Couchbase datasource package: ```shell go get gofr.dev/pkg/gofr/datasource/couchbase@latest ``` ### Example Here is an example of how to use the Couchbase datasource in a GoFr application: ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/couchbase" ) type User struct { ID string `json:"id"` Name string `json:"name"` Age int `json:"age"` } func main() { // Create a new GoFr application app := gofr.New() // Add the Couchbase datasource to the application app.AddCouchbase(couchbase.New(&couchbase.Config{ Host: app.Config.Get("HOST"), User: app.Config.Get("USER"), Password: app.Config.Get("PASSWORD"), Bucket: app.Config.Get("BUCKET"), })) // Add the routes app.GET("/users/{id}", getUser) app.POST("/users", createUser) app.DELETE("/users/{id}", deleteUser) // Run the application app.Run() } func getUser(c *gofr.Context) (any, error) { // Get the user ID from the URL path id := c.PathParam("id") // Get the user from Couchbase var user User if err := c.Couchbase.Get(c, id, &user); err != nil { return nil, err } return user, nil } func createUser(c *gofr.Context) (any, error) { // Get the user from the request body var user User if err := c.Bind(&user); err != nil { return nil, err } // Insert the user into Couchbase if err := c.Couchbase.Insert(c, user.ID, user, nil); err != nil { return nil, err } return "user created successfully", nil } func deleteUser(c *gofr.Context) (any, error) { // Get the user ID from the URL path id := c.PathParam("id") // Remove the user from Couchbase if err := c.Couchbase.Remove(c, id); err != nil { return nil, err } return "user deleted successfully", nil } ``` --- ## https://gofr.dev/docs/datasources/dgraph # Dgraph ## 
Configuration To connect to `Dgraph`, you need to provide the following environment variables and use it: - `HOST`: The hostname or IP address of your Dgraph server. - `PORT`: The port number. ## Setup GoFr supports injecting Dgraph with an interface that defines the necessary methods for interacting with the Dgraph database. Any driver that implements the following interface can be added using the app.AddDgraph() method. ```go // Dgraph defines the methods for interacting with a Dgraph database. type Dgraph interface { // ApplySchema applies or updates the complete database schema. ApplySchema(ctx context.Context, schema string) error // AddOrUpdateField atomically creates or updates a single field definition. AddOrUpdateField(ctx context.Context, fieldName, fieldType, directives string) error // DropField permanently removes a field/predicate and all its associated data. DropField(ctx context.Context, fieldName string) error // Query executes a read-only query in the Dgraph database and returns the result. Query(ctx context.Context, query string) (any, error) // QueryWithVars executes a read-only query with variables in the Dgraph database. QueryWithVars(ctx context.Context, query string, vars map[string]string) (any, error) // Mutate executes a write operation (mutation) in the Dgraph database and returns the result. Mutate(ctx context.Context, mu any) (any, error) // Alter applies schema or other changes to the Dgraph database. Alter(ctx context.Context, op any) error // NewTxn creates a new transaction (read-write) for interacting with the Dgraph database. NewTxn() any // NewReadOnlyTxn creates a new read-only transaction for querying the Dgraph database. NewReadOnlyTxn() any // HealthChecker checks the health of the Dgraph instance. HealthChecker } ``` Users can easily inject a driver that supports this interface, allowing for flexibility without compromising usability. This structure supports both queries and mutations in Dgraph. 
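Since `Mutate` accepts raw JSON in Dgraph's `{"set": [...]}` shape, the payload can be built from Go maps instead of hand-written strings, which avoids quoting mistakes. A stdlib-only sketch — the helper name `buildSetMutation` is hypothetical, not part of the GoFr API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildSetMutation marshals a list of nodes into the {"set": [...]}
// JSON shape that a Dgraph set-mutation expects.
func buildSetMutation(nodes []map[string]any) ([]byte, error) {
	return json.Marshal(map[string]any{"set": nodes})
}

func main() {
	payload, err := buildSetMutation([]map[string]any{
		{"name": "GoFr Dev"},
		{"name": "James Doe"},
	})
	if err != nil {
		panic(err)
	}

	// The resulting bytes can be assigned to api.Mutation's SetJson field.
	fmt.Println(string(payload))
}
```

The output of `buildSetMutation` is what the example below assigns to `SetJson` on `api.Mutation`.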
Import GoFr's external driver for Dgraph: ```shell go get gofr.dev/pkg/gofr/datasource/dgraph@latest ``` ### Example ```go package main import ( "encoding/json" "fmt" "github.com/dgraph-io/dgo/v210/protos/api" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/dgraph" ) func main() { // Create a new application app := gofr.New() db := dgraph.New(dgraph.Config{ Host: app.Config.Get("HOST"), Port: app.Config.Get("PORT"), }) // Register the Dgraph client with the application app.AddDgraph(db) // Add routes for Dgraph operations app.POST("/dgraph", DGraphInsertHandler) app.GET("/dgraph", DGraphQueryHandler) // Run the application app.Run() } // DGraphInsertHandler handles POST requests to insert data into Dgraph func DGraphInsertHandler(c *gofr.Context) (any, error) { // Example mutation data to insert into Dgraph mutationData := ` { "set": [ { "name": "GoFr Dev" }, { "name": "James Doe" } ] } ` // Create an api.Mutation object mutation := &api.Mutation{ SetJson: []byte(mutationData), // Set the JSON payload CommitNow: true, // Auto-commit the transaction } // Run the mutation in Dgraph response, err := c.DGraph.Mutate(c, mutation) if err != nil { return nil, err } return response, nil } // DGraphQueryHandler handles GET requests to fetch data from Dgraph func DGraphQueryHandler(c *gofr.Context) (any, error) { // A simple query to fetch all persons with a name in Dgraph response, err := c.DGraph.Query(c, "{ persons(func: has(name)) { uid name } }") if err != nil { return nil, err } // Cast response to *api.Response (the correct type returned by Dgraph Query) resp, ok := response.(*api.Response) if !ok { return nil, fmt.Errorf("unexpected response type") } // Parse the response JSON var result map[string]any err = json.Unmarshal(resp.Json, &result) if err != nil { return nil, err } return result, nil } ``` --- ## https://gofr.dev/docs/datasources/elasticsearch # Elasticsearch ## Configuration To connect to `Elasticsearch`, you need to provide the following environment 
variables: - `ADDRESSES`: Set of elasticsearch node URLs that the client will connect to. - `USERNAME`: The username for connecting to the database. - `PASSWORD`: The password for the specified user. ## Setup GoFr supports injecting Elasticsearch with an interface that defines the necessary methods for interacting with Elasticsearch. Any driver that implements the following interface can be added using the app.AddElasticsearch() method. ```go // Elasticsearch defines the methods for interacting with an Elasticsearch database. type Elasticsearch interface { // Connect initializes the Elasticsearch client with the provided configuration. Connect() // CreateIndex creates an index with specified settings. CreateIndex(ctx context.Context, index string, settings map[string]any) error // DeleteIndex removes an index from Elasticsearch. DeleteIndex(ctx context.Context, index string) error // IndexDocument creates or replaces a document in the specified index. IndexDocument(ctx context.Context, index, id string, document any) error // GetDocument retrieves a document by its ID. GetDocument(ctx context.Context, index, id string) (map[string]any, error) // UpdateDocument applies a partial update to an existing document. UpdateDocument(ctx context.Context, index, id string, update map[string]any) error // DeleteDocument removes a document from an index. DeleteDocument(ctx context.Context, index, id string) error // Search executes a search query against one or more indices. Search(ctx context.Context, indices []string, query map[string]any) (map[string]any, error) // Bulk executes multiple operations in a single API call. Bulk(ctx context.Context, operations []map[string]any) (map[string]any, error) // HealthCheck verifies connectivity to the Elasticsearch cluster. HealthChecker } ``` Users can easily inject a driver that supports this interface, allowing for flexibility without compromising usability. 
This structure supports all common Elasticsearch operations including indexing, searching, and document management. Import GoFr's external driver for Elasticsearch: ```shell go get gofr.dev/pkg/gofr/datasource/elasticsearch@latest ``` ### Example ```go package main import ( "encoding/json" "net/http" "strings" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/elasticsearch" ) func main() { // Create a new application app := gofr.New() // Create Elasticsearch client with configuration. // ADDRESSES is a comma-separated list of node URLs (e.g. // "http://localhost:9200" or "http://es-1:9200,http://es-2:9200"). es := elasticsearch.New(elasticsearch.Config{ Addresses: strings.Split(app.Config.Get("ADDRESSES"), ","), Username: app.Config.Get("USERNAME"), Password: app.Config.Get("PASSWORD"), }) // Add Elasticsearch to the application app.AddElasticsearch(es) // Add routes for Elasticsearch operations app.POST("/documents", CreateDocumentHandler) app.GET("/documents/{id}", GetDocumentHandler) app.GET("/search", SearchDocumentsHandler) // Run the application app.Run() } // CreateDocumentHandler handles POST requests to create documents in Elasticsearch func CreateDocumentHandler(c *gofr.Context) (any, error) { // Parse request body var document map[string]any if err := json.NewDecoder(c.Request().Body).Decode(&document); err != nil { return nil, err } // Get the document ID from a query param or header id := c.Param("id") if id == "" { id = c.Header("X-Document-ID") } // Index the document in Elasticsearch err := c.Elasticsearch.IndexDocument(c, "products", id, document) if err != nil { return nil, err } return map[string]string{"status": "document created", "id": id}, nil } // GetDocumentHandler handles GET requests to retrieve documents from Elasticsearch func GetDocumentHandler(c *gofr.Context) (any, error) { // Get document ID from URL parameter id := c.PathParam("id") if id == "" { return nil, gofr.NewError(http.StatusBadRequest, "document ID is required") 
} // Retrieve the document from Elasticsearch result, err := c.Elasticsearch.GetDocument(c, "products", id) if err != nil { return nil, err } return result["_source"], nil } // SearchDocumentsHandler handles GET requests to search documents in Elasticsearch func SearchDocumentsHandler(c *gofr.Context) (any, error) { query := c.Param("q") // Build search query searchQuery := map[string]any{ "query": map[string]any{ "multi_match": map[string]any{ "query": query, "fields": []string{"name", "description"}, }, }, } // Execute search result, err := c.Elasticsearch.Search(c, []string{"products"}, searchQuery) if err != nil { return nil, err } // Process and return search hits hits := result["hits"].(map[string]any)["hits"].([]any) documents := make([]map[string]any, len(hits)) for i, hit := range hits { hitMap := hit.(map[string]any) documents[i] = hitMap["_source"].(map[string]any) documents[i]["id"] = hitMap["_id"] } return documents, nil } ``` --- ## https://gofr.dev/docs/datasources/getting-started # Getting Started GoFr adopts an interface-driven architecture for datasource integration, providing a consistent way to work with various databases. Each datasource implements predefined interfaces that define its core functionality, so users can inject any database client that satisfies those interface contracts, making it easy to swap out or add new datasources as needed. Keeping every database driver inside the framework would have been counter-productive for the size of the final build. Only the most used MySQL and Redis drivers ship with the framework; all other datasources are injected by the user. This reduces the build size, and in turn the build time, since unnecessary database drivers are not compiled into the build.
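The injection pattern described above can be sketched with a toy example. The `KVStore` interface, `memStore` client, and `app` container here are illustrative stand-ins, not real GoFr types; they only show the shape of the contract:

```go
package main

import "fmt"

// KVStore stands in for the base interface GoFr defines per datasource.
type KVStore interface {
	Get(key string) (string, error)
}

// memStore is one client satisfying the interface; a Redis- or file-backed
// client with the same method set would be interchangeable.
type memStore struct{ data map[string]string }

func (m memStore) Get(key string) (string, error) {
	v, ok := m.data[key]
	if !ok {
		return "", fmt.Errorf("key %q not found", key)
	}
	return v, nil
}

// app mirrors how the framework container holds whatever client was injected.
type app struct{ kv KVStore }

func (a *app) AddKVStore(client KVStore) { a.kv = client }

func main() {
	a := &app{}
	a.AddKVStore(memStore{data: map[string]string{"greeting": "hello"}})

	v, _ := a.kv.Get("greeting")
	fmt.Println(v) // hello
}
```

Because the container only depends on the interface, swapping the backing store never touches application code — the same property GoFr's `app.Add*()` methods rely on.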
> We are planning to provide custom drivers for the most common databases; these are in the pipeline for upcoming releases! ## Supported Databases {% table %} - Datasource - Health-Check - Logs - Metrics - Traces - Version-Migrations --- - MySQL - ✅ - ✅ - ✅ - ✅ - ✅ --- - REDIS - ✅ - ✅ - ✅ - ✅ - ✅ --- - PostgreSQL - ✅ - ✅ - ✅ - ✅ - ✅ --- - CockroachDB - ✅ - ✅ - ✅ - ✅ - ✅ --- - ArangoDB - ✅ - ✅ - ✅ - ✅ - ✅ --- - BadgerDB - ✅ - ✅ - ✅ - ✅ - --- - Cassandra - ✅ - ✅ - ✅ - ✅ - ✅ --- - ClickHouse - - ✅ - ✅ - ✅ - ✅ --- - DGraph - ✅ - ✅ - ✅ - ✅ - ✅ --- - MongoDB - ✅ - ✅ - ✅ - ✅ - ✅ --- - NATS KV - ✅ - ✅ - ✅ - ✅ - --- - OpenTSDB - ✅ - ✅ - - ✅ - --- - ScyllaDB - ✅ - ✅ - ✅ - ✅ - --- - Solr - - ✅ - ✅ - ✅ - --- - SQLite - ✅ - ✅ - ✅ - ✅ - ✅ --- - SurrealDB - ✅ - ✅ - ✅ - ✅ - ✅ --- - Elasticsearch - ✅ - ✅ - ✅ - ✅ - ✅ {% /table %} --- ## https://gofr.dev/docs/datasources/influxdb # InfluxDB GoFr supports injecting InfluxDB using an interface that defines the necessary methods to interact with InfluxDB v2+. Any driver that implements this interface can be injected via the `app.AddInfluxDB()` method. --- ## Interface ```go // InfluxDB defines the methods for interacting with an InfluxDB database.
type InfluxDB interface { CreateOrganization(ctx context.Context, orgName string) (string, error) DeleteOrganization(ctx context.Context, orgID string) error ListOrganization(ctx context.Context) (map[string]string, error) CreateBucket(ctx context.Context, orgID, bucketName string) (string, error) DeleteBucket(ctx context.Context, bucketID string) error ListBuckets(ctx context.Context, org string) (map[string]string, error) Ping(ctx context.Context) (bool, error) HealthCheck(ctx context.Context) (any, error) Query(ctx context.Context, org string, fluxQuery string) ([]map[string]any, error) WritePoints(ctx context.Context, bucket string, org string, points []container.InfluxPoint) error } ``` This structure supports all essential InfluxDB operations including organization/bucket management, health checks, and metrics ingestion. Import the gofr's external driver for influxdb: ```bash go get gofr.dev/pkg/gofr/datasource/influxdb@latest ``` ## Example ```go package main import ( "context" "fmt" "time" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/influxdb" ) func main() { // Create a new GoFr application app := gofr.New() // Initialize InfluxDB client client := influxdb.New(influxdb.Config{ Url: "http://localhost:8086", Username: "admin", Password: "admin1234", Token: "", }) // Add InfluxDB to application context app.AddInfluxDB(client) // Sample route app.GET("/greet", func(ctx *gofr.Context) (any, error) { return "Hello World!", nil }) // Ping InfluxDB ok, err := client.Ping(context.Background()) if err != nil { app.Logger().Fatal(err) } app.Logger().Debug("InfluxDB connected: ", ok) // Create organization orgID, err := client.CreateOrganization(context.Background(), "demo-org") if err != nil { app.Logger().Fatal(err) } // List organizations orgs, _ := client.ListOrganization(context.Background()) app.Logger().Debug("Organizations: ") for id, name := range orgs { app.Logger().Debug(id, name) } // Create bucket bucketID, err := 
client.CreateBucket(context.Background(), orgID, "demo-bucket") if err != nil { app.Logger().Fatal(err) } // List buckets for organization buckets, err := client.ListBuckets(context.Background(), "demo-org") if err != nil { app.Logger().Fatal(err) } app.Logger().Debug("Buckets:", buckets) // Delete bucket if err := client.DeleteBucket(context.Background(), bucketID); err != nil { app.Logger().Fatal(err) } app.Logger().Debug("Bucket deleted successfully") // Delete organization if err := client.DeleteOrganization(context.Background(), orgID); err != nil { app.Logger().Fatal(err) } app.Logger().Debug("Organization deleted successfully") // Start the server app.Run() } ``` --- ## https://gofr.dev/docs/datasources/migrations/elasticsearch # Elasticsearch Migrations Elasticsearch migrations in **GoFr** let you manage index schemas, mappings, settings and data in a *version-controlled* manner. This guide explains how to implement and operate these migrations without breaking production. ## Overview Elasticsearch migrations help you: - Create and manage indices with proper mappings - Update index settings and configurations - Seed initial data or migrate existing data - Perform bulk operations efficiently - Maintain schema consistency across environments ## Migration Tracking GoFr automatically creates a `gofr_migrations` index in Elasticsearch to track applied migrations. 
The index stores: - Migration version (timestamp) - Execution method (UP) - Start time and duration - Migration status ## Basic Migration Structure ```go package main import ( "context" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/elasticsearch" "gofr.dev/pkg/gofr/migration" ) func main() { app := gofr.New() // Configure Elasticsearch esClient := elasticsearch.New(elasticsearch.Config{ Addresses: []string{"http://localhost:9200"}, }) app.AddElasticsearch(esClient) // Define migrations migrationsMap := map[int64]migration.Migrate{ 1640995200: { UP: func(d migration.Datasource) error { // Migration logic here return nil }, }, } // Register and run migrations app.Migrate(migrationsMap) app.Run() } ``` ## Available Operations ### Index Management ```go // Create an index with mappings and settings CreateIndex(ctx context.Context, index string, settings map[string]any) error // Delete an index DeleteIndex(ctx context.Context, index string) error ``` ### Document Operations ```go // Index a single document IndexDocument(ctx context.Context, index, id string, document any) error // Delete a document by ID DeleteDocument(ctx context.Context, index, id string) error // Bulk operations for multiple documents Bulk(ctx context.Context, operations []map[string]any) (map[string]any, error) ``` ## Migration Examples ### 1. 
Creating an Index with Mappings ```go 1640995200: { UP: func(d migration.Datasource) error { settings := map[string]any{ "mappings": map[string]any{ "properties": map[string]any{ "title": map[string]any{ "type": "text", "analyzer": "standard", }, "price": map[string]any{ "type": "float", }, "category": map[string]any{ "type": "keyword", }, "created_at": map[string]any{ "type": "date", }, "tags": map[string]any{ "type": "keyword", }, }, }, "settings": map[string]any{ "number_of_shards": 1, "number_of_replicas": 0, "analysis": map[string]any{ "analyzer": map[string]any{ "custom_text_analyzer": map[string]any{ "type": "standard", "stopwords": "_english_", }, }, }, }, } return d.Elasticsearch.CreateIndex(context.Background(), "products", settings) }, }, ``` ### 2. Seeding Initial Data ```go 1640995300: { UP: func(d migration.Datasource) error { // Create sample products products := []map[string]any{ { "title": "Laptop", "price": 999.99, "category": "electronics", "created_at": "2024-01-01T00:00:00Z", "tags": []string{"computer", "portable"}, }, { "title": "Coffee Mug", "price": 12.99, "category": "kitchen", "created_at": "2024-01-01T00:00:00Z", "tags": []string{"ceramic", "drink"}, }, } ctx := context.Background() for i, product := range products { err := d.Elasticsearch.IndexDocument( ctx, "products", fmt.Sprintf("%d", i+1), product, ) if err != nil { return fmt.Errorf("failed to index product %d: %w", i+1, err) } } return nil }, }, ``` ### 3. 
Bulk Operations Migration ```go 1640995400: { UP: func(d migration.Datasource) error { // Bulk index multiple documents efficiently operations := []map[string]any{ // Index operation metadata { "index": map[string]any{ "_index": "products", "_id": "bulk_1", }, }, // Document data { "title": "Bulk Product 1", "price": 19.99, "category": "bulk", }, // Another index operation { "index": map[string]any{ "_index": "products", "_id": "bulk_2", }, }, // Document data { "title": "Bulk Product 2", "price": 29.99, "category": "bulk", }, // Delete operation { "delete": map[string]any{ "_index": "products", "_id": "old_product", }, }, } ctx := context.Background() result, err := d.Elasticsearch.Bulk(ctx, operations) if err != nil { return fmt.Errorf("bulk operation failed: %w", err) } // Check for errors in bulk response if errors, ok := result["errors"].(bool); ok && errors { return fmt.Errorf("bulk operation had errors: %v", result) } return nil }, }, ``` ### 4. Index Settings Update ```go 1640995500: { UP: func(d migration.Datasource) error { // Create a new index with updated settings settings := map[string]any{ "mappings": map[string]any{ "properties": map[string]any{ "title": map[string]any{ "type": "text", "analyzer": "custom_text_analyzer", }, "description": map[string]any{ "type": "text", "analyzer": "standard", }, "price": map[string]any{ "type": "float", }, }, }, "settings": map[string]any{ "number_of_shards": 2, // Increased shards "number_of_replicas": 1, // Added replica "refresh_interval": "30s", }, } return d.Elasticsearch.CreateIndex(context.Background(), "products_v2", settings) }, }, ``` ### 5. Data Migration Between Indices ```go 1640995600: { UP: func(d migration.Datasource) error { ctx := context.Background() // This would typically involve: // 1. Reading data from old index (using Search - not shown in interface yet) // 2. Transforming data if needed // 3. Bulk indexing to new index // 4. 
Deleting old index // For now, we'll create the new index structure newSettings := map[string]any{ "mappings": map[string]any{ "properties": map[string]any{ "product_name": map[string]any{ // Renamed from 'title' "type": "text", }, "product_price": map[string]any{ // Renamed from 'price' "type": "float", }, "product_category": map[string]any{ // Renamed from 'category' "type": "keyword", }, }, }, } err := d.Elasticsearch.CreateIndex(ctx, "products_new_schema", newSettings) if err != nil { return fmt.Errorf("failed to create new schema index: %w", err) } // Clean up old index return d.Elasticsearch.DeleteIndex(ctx, "products_old") }, }, ``` ## Bulk Operations Format ### Index Operation ```go { "index": map[string]any{ "_index": "index_name", "_id": "document_id", }, } // Followed by document data { "field1": "value1", "field2": "value2", } ``` ### Update Operation ```go { "update": map[string]any{ "_index": "index_name", "_id": "document_id", }, } // Followed by update data { "doc": map[string]any{ "field1": "new_value1", }, } ``` ### Delete Operation ```go { "delete": map[string]any{ "_index": "index_name", "_id": "document_id", }, } // No document data needed for delete ``` ## Best Practices ### 1. Index Naming - Use descriptive names: `users`, `products`, `orders` - Consider versioning: `products_v1`, `products_v2` - Use consistent naming conventions ### 2. Mapping Design - Define explicit mappings rather than relying on dynamic mapping - Choose appropriate field types - Consider analyzer requirements for text fields - Plan for future field additions ### 3. Settings Configuration - Set appropriate shard and replica counts - Configure refresh intervals based on use case - Set up custom analyzers if needed ### 4. Migration Safety - Test migrations on non-production data first - Use bulk operations for large data sets - Implement proper error handling - Consider index aliases for zero-downtime migrations ### 5. 
Performance Considerations - Use bulk operations for multiple documents - Batch operations appropriately (1 000 – 5 000 docs per batch) - Monitor cluster health during migrations - Consider disabling replicas during large data migrations ## Error Handling ```go UP: func(d migration.Datasource) error { ctx := context.Background() // Check if index already exists (idempotent migration) settings := map[string]any{ "mappings": map[string]any{ "properties": map[string]any{ "name": map[string]any{"type": "text"}, }, }, } err := d.Elasticsearch.CreateIndex(ctx, "users", settings) if err != nil { // Handle specific Elasticsearch errors if strings.Contains(err.Error(), "resource_already_exists_exception") { // Index already exists, this is okay return nil } return fmt.Errorf("failed to create users index: %w", err) } return nil }, ``` ## Monitoring Migration Logs ```plaintext INFO [15:09:13] running migration 1640995200 DEBU [15:09:13] CREATE INDEX products ELASTIC 215759µs products {"mappings":{"properties":{"price":{"type":"float"},"title":{"type":"text"}}},"settings":{"number_of_replicas":0,"number_of_shards":1}} DEBU [15:09:13] INDEX DOCUMENT products/1 ELASTIC 87374µs 1 {"price":19.99,"title":"Sample Product"} ``` The logs show: - **Operation type** – CREATE INDEX, INDEX DOCUMENT, BULK, etc. 
- **Execution time** – In microseconds - **Target** – Index name, document ID - **Query/Data** – Full JSON of the operation (no base64 encoding) ## Complete Example ```go package main import ( "context" "fmt" "os" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/elasticsearch" "gofr.dev/pkg/gofr/migration" ) func main() { app := gofr.New() // Configure Elasticsearch esURL := os.Getenv("ELASTICSEARCH_URL") if esURL == "" { esURL = "http://localhost:9200" } esClient := elasticsearch.New(elasticsearch.Config{ Addresses: []string{esURL}, }) app.AddElasticsearch(esClient) // Define migrations migrationsMap := map[int64]migration.Migrate{ // Create users index 1640995200: { UP: func(d migration.Datasource) error { settings := map[string]any{ "mappings": map[string]any{ "properties": map[string]any{ "name": map[string]any{"type": "keyword"}, "email": map[string]any{"type": "keyword"}, "age": map[string]any{"type": "integer"}, }, }, } return d.Elasticsearch.CreateIndex(context.Background(), "users", settings) }, }, // Seed initial users 1640995300: { UP: func(d migration.Datasource) error { users := []map[string]any{ {"name": "Alice", "email": "alice@example.com", "age": 30}, {"name": "Bob", "email": "bob@example.com", "age": 25}, } ctx := context.Background() for i, user := range users { err := d.Elasticsearch.IndexDocument( ctx, "users", fmt.Sprintf("%d", i+1), user, ) if err != nil { return err } } return nil }, }, // Bulk add more users 1640995400: { UP: func(d migration.Datasource) error { operations := []map[string]any{ {"index": map[string]any{"_index": "users", "_id": "3"}}, {"name": "Carol", "email": "carol@example.com", "age": 28}, {"index": map[string]any{"_index": "users", "_id": "4"}}, {"name": "David", "email": "david@example.com", "age": 35}, } _, err := d.Elasticsearch.Bulk(context.Background(), operations) return err }, }, } // Run migrations app.Migrate(migrationsMap) // Add API endpoints app.GET("/users", getUsersHandler) app.Run() } func 
getUsersHandler(ctx *gofr.Context) (any, error) { query := map[string]any{ "query": map[string]any{"match_all": map[string]any{}}, "size": 10, } result, err := ctx.Container.Elasticsearch.Search( ctx.Context, []string{"users"}, query, ) if err != nil { return nil, err } return result, nil } ``` **Enjoy consistent, version-controlled Elasticsearch migrations with GoFr!** --- ## https://gofr.dev/docs/datasources/mongodb # MongoDB ## Configuration To connect to `MongoDB`, you need to provide the following environment variables: - `URI`: Mongodb server URL that the client connects to. - `DATABASE`: The name of the database to connect to. - `CONNECTIONTIMEOUT`: The maximum time the client will wait while trying to establish a connection. ## Setup GoFr supports injecting MongoDB that supports the following interface. Any driver that implements the interface can be added using `app.AddMongo()` method, and users can use MongoDB across application with `gofr.Context`. ```go type Mongo interface { Find(ctx context.Context, collection string, filter any, results any) error FindOne(ctx context.Context, collection string, filter any, result any) error InsertOne(ctx context.Context, collection string, document any) (any, error) InsertMany(ctx context.Context, collection string, documents []any) ([]any, error) DeleteOne(ctx context.Context, collection string, filter any) (int64, error) DeleteMany(ctx context.Context, collection string, filter any) (int64, error) UpdateByID(ctx context.Context, collection string, id any, update any) (int64, error) UpdateOne(ctx context.Context, collection string, filter any, update any) error UpdateMany(ctx context.Context, collection string, filter any, update any) (int64, error) CountDocuments(ctx context.Context, collection string, filter any) (int64, error) Drop(ctx context.Context, collection string) error } ``` Users can easily inject a driver that supports this interface; this provides usability without compromising the extensibility to use 
multiple databases. Import GoFr's external driver for MongoDB: ```shell go get gofr.dev/pkg/gofr/datasource/mongo@latest ``` ### Example ```go package main import ( "time" "go.mongodb.org/mongo-driver/bson" "gofr.dev/pkg/gofr/datasource/mongo" "gofr.dev/pkg/gofr" ) type Person struct { Name string `bson:"name" json:"name"` Age int `bson:"age" json:"age"` City string `bson:"city" json:"city"` } func main() { app := gofr.New() db := mongo.New(mongo.Config{URI: app.Config.Get("URI"), Database: app.Config.Get("DATABASE"), ConnectionTimeout: 4 * time.Second}) // inject Mongo into GoFr to use MongoDB across the application // through gofr.Context app.AddMongo(db) app.POST("/mongo", Insert) app.GET("/mongo/{name}", Get) app.Run() } func Insert(ctx *gofr.Context) (any, error) { var p Person err := ctx.Bind(&p) if err != nil { return nil, err } res, err := ctx.Mongo.InsertOne(ctx, "collection", p) if err != nil { return nil, err } return res, nil } func Get(ctx *gofr.Context) (any, error) { var result Person p := ctx.PathParam("name") err := ctx.Mongo.FindOne(ctx, "collection", bson.D{{"name", p}} /* valid filter */, &result) if err != nil { return nil, err } return result, nil } ``` --- ## https://gofr.dev/docs/datasources/opentsdb # OpenTSDB ## Configuration To connect to `OpenTSDB`, you need to provide the following environment variables: - `HOSTS`: The hostname or IP address of your OpenTSDB server. - `MAXCONTENTLENGTH`: Max length of the request body in bytes. - `MAXPUTPOINTSNUM`: Max number of data points that can be sent in a single `PUT` request. - `DETECTDELTANUM`: The number of data points that OpenTSDB looks at to spot unusual time gaps. ## Setup GoFr supports injecting OpenTSDB to facilitate interaction with OpenTSDB's REST APIs. Implementations adhering to the `OpenTSDB` interface can be registered with `app.AddOpenTSDB()`, enabling applications to leverage OpenTSDB for time-series data management through `gofr.Context`.
```go // OpenTSDB provides methods for GoFr applications to communicate with OpenTSDB // through its REST APIs. type OpenTSDB interface { // HealthChecker verifies if the OpenTSDB server is reachable. // Returns an error if the server is unreachable, otherwise nil. HealthChecker // PutDataPoints sends data to store metrics in OpenTSDB. // // Parameters: // - ctx: Context for managing request lifetime. // - data: A slice of DataPoint objects; must contain at least one entry. // - queryParam: Specifies the response format: // - client.PutRespWithSummary: Requests a summary response. // - client.PutRespWithDetails: Requests detailed response information. // - Empty string (""): No additional response details. // // - res: A pointer to PutResponse, where the server's response will be stored. // // Returns: // - Error if parameters are invalid, response parsing fails, or if connectivity issues occur. PutDataPoints(ctx context.Context, data any, queryParam string, res any) error // QueryDataPoints retrieves data based on the specified parameters. // // Parameters: // - ctx: Context for managing request lifetime. // - param: An instance of QueryParam with query parameters for filtering data. // - res: A pointer to QueryResponse, where the server's response will be stored. QueryDataPoints(ctx context.Context, param any, res any) error // QueryLatestDataPoints fetches the latest data point(s). // // Parameters: // - ctx: Context for managing request lifetime. // - param: An instance of QueryLastParam with query parameters for the latest data point. // - res: A pointer to QueryLastResponse, where the server's response will be stored. QueryLatestDataPoints(ctx context.Context, param any, res any) error // GetAggregators retrieves available aggregation functions. // // Parameters: // - ctx: Context for managing request lifetime. // - res: A pointer to AggregatorsResponse, where the server's response will be stored. 
GetAggregators(ctx context.Context, res any) error // QueryAnnotation retrieves a single annotation. // // Parameters: // - ctx: Context for managing request lifetime. // - queryAnnoParam: A map of parameters for the annotation query, such as client.AnQueryStartTime, client.AnQueryTSUid. // - res: A pointer to AnnotationResponse, where the server's response will be stored. QueryAnnotation(ctx context.Context, queryAnnoParam map[string]any, res any) error // PostAnnotation creates or updates an annotation. // // Parameters: // - ctx: Context for managing request lifetime. // - annotation: The annotation to be created or updated. // - res: A pointer to AnnotationResponse, where the server's response will be stored. PostAnnotation(ctx context.Context, annotation any, res any) error // PutAnnotation creates or replaces an annotation. // Fields not included in the request will be reset to default values. // // Parameters: // - ctx: Context for managing request lifetime. // - annotation: The annotation to be created or replaced. // - res: A pointer to AnnotationResponse, where the server's response will be stored. PutAnnotation(ctx context.Context, annotation any, res any) error // DeleteAnnotation removes an annotation. // // Parameters: // - ctx: Context for managing request lifetime. // - annotation: The annotation to be deleted. // - res: A pointer to AnnotationResponse, where the server's response will be stored. DeleteAnnotation(ctx context.Context, annotation any, res any) error } ``` Import the gofr's external driver for OpenTSDB: ```go go get gofr.dev/pkg/gofr/datasource/opentsdb ``` The following example demonstrates injecting an OpenTSDB instance into a GoFr application and using it to perform a health check on the OpenTSDB server. 
```go package main import ( "context" "fmt" "math/rand/v2" "time" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/opentsdb" ) func main() { app := gofr.New() // Initialize OpenTSDB connection app.AddOpenTSDB(opentsdb.New(opentsdb.Config{ Host: app.Config.Get("HOST"), MaxContentLength: app.Config.Get("MAXCONTENTLENGTH"), MaxPutPointsNum: app.Config.Get("MAXPUTPOINTSNUM"), DetectDeltaNum: app.Config.Get("DETECTDELTANUM"), })) // Register routes app.GET("/health", opentsdbHealthCheck) app.POST("/write", writeDataPoints) app.GET("/query", queryDataPoints) // Run the app app.Run() } // Health check for OpenTSDB func opentsdbHealthCheck(c *gofr.Context) (any, error) { res, err := c.OpenTSDB.HealthCheck(context.Background()) if err != nil { return nil, err } return res, nil } // Write Data Points to OpenTSDB func writeDataPoints(c *gofr.Context) (any, error) { PutDataPointNum := 4 name := []string{"cpu", "disk", "net", "mem"} cpuDatas := make([]opentsdb.DataPoint, 0) tags := map[string]string{ "host": "gofr-host", "try-name": "gofr-sample", "demo-name": "opentsdb-test", } for i := 0; i < PutDataPointNum; i++ { data := opentsdb.DataPoint{ Metric: name[i%len(name)], Timestamp: time.Now().Unix(), Value: rand.Float64() * 100, Tags: tags, } cpuDatas = append(cpuDatas, data) } resp := opentsdb.PutResponse{} err := c.OpenTSDB.PutDataPoints(context.Background(), cpuDatas, "details", &resp) if err != nil { return resp.Errors, err } return fmt.Sprintf("%v Data points written successfully", resp.Success), nil } // Query Data Points from OpenTSDB func queryDataPoints(c *gofr.Context) (any, error) { st1 := time.Now().Unix() - 3600 st2 := time.Now().Unix() queryParam := opentsdb.QueryParam{ Start: st1, End: st2, } name := []string{"cpu", "disk", "net", "mem"} subqueries := make([]opentsdb.SubQuery, 0) tags := map[string]string{ "host": "gofr-host", "try-name": "gofr-sample", "demo-name": "opentsdb-test", } for _, metric := range name { subQuery := opentsdb.SubQuery{ Aggregator: 
"sum", Metric: metric, Tags: tags, } subqueries = append(subqueries, subQuery) } queryParam.Queries = subqueries queryResp := &opentsdb.QueryResponse{} err := c.OpenTSDB.QueryDataPoints(c, &queryParam, queryResp) if err != nil { return nil, err } return queryResp.QueryRespCnts, nil } ``` --- ## https://gofr.dev/docs/datasources/oracle # OracleDB ## Configuration To connect to `OracleDB`, you need to provide the following environment variables: - `HOST`: The hostname or IP address of your OracleDB server. - `PORT`: The port number. - `USERNAME`: The username for connecting to the database. - `PASSWORD`: The password for the specified user. - `SERVICE`: The specific Oracle database instance or service on the server that the client should connect to. ## Setup GoFr supports injecting OracleDB as a relational datasource through a clean, extensible interface. Any driver that implements the following interface can be added using the `app.AddOracle()` method, and users can access OracleDB throughout their application via `gofr.Context`. ```go type Oracle interface { Exec(ctx context.Context, query string, args ...any) error Select(ctx context.Context, dest any, query string, args ...any) error } ``` This approach allows users to easily inject any compatible Oracle driver, providing both usability and the flexibility to use multiple databases in a GoFr application. ## Important: Oracle Database Must Exist **Before running your GoFr application, you must ensure that the Oracle database and the required schema (such as the `users` table) are already created.** - Oracle does not allow creating a database (PDB or CDB) via a simple SQL query from a standard client connection. - You must use Oracle tools (like DBCA, SQL\*Plus as SYSDBA, or Docker container initialization) to create the database and pluggable database (PDB) before connecting your app. - Your application can create tables within an existing schema, but the database itself must be provisioned in advance. 
## Setting Up OracleDB with Docker To help new users, the following steps outline how to quickly set up an OracleDB instance using Docker. ### 1. Prerequisites - **Docker** installed on your system. - An **Oracle account** (free) with access to the Oracle Container Registry. ### 2. Create Your Oracle Account Visit the Oracle Container Registry and create or sign in to your account: 👉 [https://container-registry.oracle.com/ords/f?p=113:10:14574461221664:::::](https://container-registry.oracle.com/ords/f?p=113:10:14574461221664:::::) ### 3. Pull the Oracle Free Database Docker Image In your terminal: 1. Log in to the Oracle Container Registry using your Oracle account credentials: ```sh docker login container-registry.oracle.com ``` 2. After login, pull the Oracle Free Database image: ```sh docker pull container-registry.oracle.com/database/free:latest ``` ### 4. Run the Oracle Database Container You can now run the OracleDB container (replace `YourPasswordHere` with a suitable strong password): ```sh docker run -d --name oracle-free -p 1521:1521 -e ORACLE_PWD=YourPasswordHere container-registry.oracle.com/database/free:latest ``` - The database will be available on port **1521** - The default Pluggable Database (PDB) is **FREEPDB1** - The `system` user password is your `ORACLE_PWD` - The service name for connecting is `FREEPDB1` You can verify the container is running: ```sh docker ps ``` ### 5. Connect to the Oracle Database Option 1: Direct SQL\*Plus session from within the container: ```sh docker exec -it oracle-free sqlplus system/YourPasswordHere@localhost:1521/FREEPDB1 ``` Option 2: Open bash shell inside the container and use SQL\*Plus from there: ```sh docker exec -it oracle-free bash sqlplus system/YourPasswordHere@localhost:1521/FREEPDB1 ``` ### 6. 
Create the `users` Table Based on the Go struct: ```go type User struct { Id string `db:"ID"` Name string `db:"NAME"` Age int `db:"AGE"` } ``` Run the following SQL command in SQL\*Plus: ```sql CREATE TABLE users ( id VARCHAR2(36) PRIMARY KEY, name VARCHAR2(100), age NUMBER ); ``` This will create the required table for the GoFr application to interact with. ### 7. Sample OracleDB Config for GoFr | Setting | Value | | :---------- | :----------------- | | host | `localhost` | | port | `1521` | | username | `system` | | password | `YourPasswordHere` | | service/SID | `FREEPDB1` | ## Import the GoFr External Driver for OracleDB ```bash go get gofr.dev/pkg/gofr/datasource/oracle@latest ``` ## Example ```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/oracle" ) type User struct { Id string `db:"ID"` Name string `db:"NAME"` Age int `db:"AGE"` } func main() { app := gofr.New() app.AddOracle(oracle.New(oracle.Config{ Host: app.Config.Get("HOST"), Port: app.Config.Get("PORT"), Username: app.Config.Get("USERNAME"), Password: app.Config.Get("PASSWORD"), Service: app.Config.Get("SERVICE"), })) app.POST("/user", Post) app.GET("/user", Get) app.Run() } func Post(ctx *gofr.Context) (any, error) { err := ctx.Oracle.Exec(ctx, "INSERT INTO users (id, name, age) VALUES (:1, :2, :3)", "8f165e2d-feef-416c-95f6-913ce3172e15", "aryan", 10) if err != nil { return nil, err } return "successfully inserted", nil } func Get(ctx *gofr.Context) (any, error) { var users []map[string]any err := ctx.Oracle.Select(ctx, &users, "SELECT id, name, age FROM users") if err != nil { return nil, err } return users, nil } ``` ## Example API Usage You can create a user and get users using the following commands on the command prompt: - **Create a user:** ```sh curl -X POST http://localhost:8000/user ``` - **Get all users:** ```sh curl http://localhost:8000/user ``` --- ## https://gofr.dev/docs/datasources/scylladb # ScyllaDB ## Configuration To connect to `ScyllaDB`, you need to
provide the following environment variables: - `HOST`: The hostname or IP address of your ScyllaDB server. - `KEYSPACE`: The top-level namespace. - `PORT`: The port number. - `USERNAME`: The username for connecting to the database. - `PASSWORD`: The password for the specified user. ## Setup GoFr supports pluggable ScyllaDB drivers. It defines an interface that specifies the required methods for interacting with ScyllaDB. Any driver implementation that adheres to this interface can be integrated into GoFr using the `app.AddScyllaDB()` method. ```go type ScyllaDB interface { // Query executes a CQL (Cassandra Query Language) query on the ScyllaDB cluster // and stores the result in the provided destination variable `dest`. // Accepts pointer to struct or slice as dest parameter for single and multiple rows retrieval respectively. Query(dest any, stmt string, values ...any) error // QueryWithCtx executes the query with a context and binds the result into dest parameter. // Accepts pointer to struct or slice as dest parameter for single and multiple rows retrieval respectively. QueryWithCtx(ctx context.Context, dest any, stmt string, values ...any) error // Exec executes a CQL statement (e.g., INSERT, UPDATE, DELETE) on the ScyllaDB cluster without returning any result. Exec(stmt string, values ...any) error // ExecWithCtx executes a CQL statement with the provided context and without returning any result. ExecWithCtx(ctx context.Context, stmt string, values ...any) error // ExecCAS executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause). // If the transaction fails because the existing values did not match, the previous values will be stored in dest. // Returns true if the query is applied, otherwise false. // Returns false and an error if any error occurs while executing the query. // Accepts only pointer to struct and built-in types as the dest parameter.
	ExecCAS(dest any, stmt string, values ...any) (bool, error)
	// NewBatch initializes a new batch operation with the specified name and batch type.
	NewBatch(name string, batchType int) error
	// NewBatchWithCtx initializes a new batch operation with the provided context, name, and batch type.
	NewBatchWithCtx(ctx context.Context, name string, batchType int) error
	// BatchQuery executes a batch query in the ScyllaDB cluster with the specified name, statement, and values.
	BatchQuery(name, stmt string, values ...any) error
	// BatchQueryWithCtx executes a batch query with the provided context.
	BatchQueryWithCtx(ctx context.Context, name, stmt string, values ...any) error
	// ExecuteBatchWithCtx executes the named batch with the provided context.
	ExecuteBatchWithCtx(ctx context.Context, name string) error

	// HealthChecker defines the HealthChecker interface.
	HealthChecker
}
```

Import GoFr's external driver for ScyllaDB:

```shell
go get gofr.dev/pkg/gofr/datasource/scylladb
```

```go
package main

import (
	"github.com/gocql/gocql"

	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/scylladb"
)

type User struct {
	ID    gocql.UUID `json:"id"`
	Name  string     `json:"name"`
	Email string     `json:"email"`
}

func main() {
	app := gofr.New()

	client := scylladb.New(scylladb.Config{
		Host:     app.Config.Get("HOST"),
		Keyspace: app.Config.Get("KEYSPACE"),
		Port:     app.Config.Get("PORT"),
		Username: app.Config.Get("USERNAME"),
		Password: app.Config.Get("PASSWORD"),
	})

	app.AddScyllaDB(client)

	app.GET("/users/{id}", getUser)
	app.POST("/users", addUser)

	app.Run()
}

func addUser(c *gofr.Context) (any, error) {
	var newUser User

	err := c.Bind(&newUser)
	if err != nil {
		return nil, err
	}

	err = c.ScyllaDB.ExecWithCtx(c, `INSERT INTO users (id, name, email) VALUES (?, ?, ?)`,
		newUser.ID, newUser.Name, newUser.Email)
	if err != nil {
		return nil, err
	}

	return newUser, nil
}

func getUser(c *gofr.Context) (any, error) {
	var user User

	id := c.PathParam("id")

	userID, err := gocql.ParseUUID(id)
	if err != nil {
		c.Logger.Error("Invalid UUID format:", err)
		return nil, err
	}

	err = c.ScyllaDB.QueryWithCtx(c, &user, "SELECT id, name, email FROM users WHERE id = ?", userID)
	if err != nil {
		c.Logger.Error("Error querying user:", err)
		return nil, err
	}

	return user, nil
}
```

---

## https://gofr.dev/docs/datasources/solr

# Solr

## Configuration

To connect to `Solr`, you need to provide the following environment variables:

- `HOST`: The hostname or IP address of your Solr server.
- `PORT`: The port number.

## Setup

GoFr supports injecting any Solr client that implements the following interface. Any driver that implements the interface can be added using the `app.AddSolr()` method, and users can access Solr across the application with `gofr.Context`.

```go
type Solr interface {
	Search(ctx context.Context, collection string, params map[string]any) (any, error)
	Create(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
	Update(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
	Delete(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)

	Retrieve(ctx context.Context, collection string, params map[string]any) (any, error)
	ListFields(ctx context.Context, collection string, params map[string]any) (any, error)
	AddField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
	UpdateField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
	DeleteField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
}
```

Users can easily inject a driver that supports this interface. This provides usability without compromising the extensibility to use multiple databases.

Import GoFr's external driver for Solr:

```shell
go get gofr.dev/pkg/gofr/datasource/solr@latest
```

Note: This datasource package requires the user to create the collection before performing any operations.
While testing the code below, create a collection using:

`curl --location 'http://localhost:2020/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=1&wt=xml'`

```go
package main

import (
	"bytes"
	"encoding/json"
	"errors"

	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/solr"
)

func main() {
	app := gofr.New()

	app.AddSolr(solr.New(solr.Config{
		Host: app.Config.Get("HOST"),
		Port: app.Config.Get("PORT"),
	}))

	app.POST("/solr", post)
	app.GET("/solr", get)

	app.Run()
}

type Person struct {
	Name string
	Age  int
}

func post(c *gofr.Context) (any, error) {
	p := []Person{{Name: "Srijan", Age: 24}}

	body, err := json.Marshal(p)
	if err != nil {
		return nil, err
	}

	resp, err := c.Solr.Create(c, "test", bytes.NewBuffer(body), nil)
	if err != nil {
		return nil, err
	}

	return resp, nil
}

func get(c *gofr.Context) (any, error) {
	resp, err := c.Solr.Search(c, "test", nil)
	if err != nil {
		return nil, err
	}

	res, ok := resp.(solr.Response)
	if !ok {
		return nil, errors.New("invalid response type")
	}

	return res.Data, nil
}
```

---

## https://gofr.dev/docs/datasources/surrealdb

# SurrealDB

## Configuration

To connect to `SurrealDB`, you need to provide the following environment variables:

- `HOST`: The hostname or IP address of your SurrealDB server.
- `PORT`: The port number.
- `USERNAME`: The username for connecting to the database.
- `PASSWORD`: The password for the specified user.
- `NAMESPACE`: Top-level container in SurrealDB that groups databases.
- `DATABASE`: The name of the database to connect to.
- `TLSENABLED`: TLS mode (e.g., disable, require).

## Setup

GoFr supports injecting any SurrealDB client that implements the following interface. Any driver that implements the interface can be added using the `app.AddSurrealDB()` method, and users can access SurrealDB across the application through the `gofr.Context`.

```go
// SurrealDB defines an interface representing a SurrealDB client with common database operations.
type SurrealDB interface {
	// Query executes a Surreal query with the provided variables and returns the query results as a slice of any values.
	// It returns an error if the query execution fails.
	Query(ctx context.Context, query string, vars map[string]any) ([]any, error)

	// Create inserts a new record into the specified table and returns the created record as a map.
	// It returns an error if the operation fails.
	Create(ctx context.Context, table string, data any) (map[string]any, error)

	// Update modifies an existing record in the specified table by its ID with the provided data.
	// It returns the updated record as an interface and an error if the operation fails.
	Update(ctx context.Context, table string, id string, data any) (any, error)

	// Delete removes a record from the specified table by its ID.
	// It returns the result of the delete operation as an interface and an error if the operation fails.
	Delete(ctx context.Context, table string, id string) (any, error)

	// Select retrieves all records from the specified table.
	// It returns a slice of maps representing the records and an error if the operation fails.
	Select(ctx context.Context, table string) ([]map[string]any, error)

	HealthChecker
}

// SurrealDBProvider is an interface that extends SurrealDB with additional methods for logging, metrics, or connection management.
// It is typically used for initializing and managing SurrealDB-based data sources.
type SurrealDBProvider interface {
	SurrealDB

	provider
}
```

Import GoFr's external driver for SurrealDB:

```shell
go get gofr.dev/pkg/gofr/datasource/surrealdb
```

The following example demonstrates injecting a SurrealDB instance into a GoFr application.
```go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/datasource/surrealdb" ) type Person struct { ID string `json:"id,omitempty"` Name string `json:"name"` Age int `json:"age"` Email string `json:"email,omitempty"` } type ErrorResponse struct { Message string `json:"message"` } func main() { app := gofr.New() client := surrealdb.New(&surrealdb.Config{ Host: app.Config.Get("HOST"), Port: app.Config.Get("PORT"), Username: app.Config.Get("USERNAME"), Password: app.Config.Get("PASSWORD"), Namespace: app.Config.Get("NAMESPACE"), Database: app.Config.Get("DATABASE"), TLSEnabled: app.Config.Get("TLSENABLED"), }) app.AddSurrealDB(client) // GET request to fetch person by ID app.GET("/person/{id}", func(ctx *gofr.Context) (any, error) { id := ctx.PathParam("id") query := "SELECT * FROM type::thing('person', $id)" vars := map[string]any{ "id": id, } result, err := ctx.SurrealDB.Query(ctx, query, vars) if err != nil { return nil, err } return result, nil }) // POST request to create a new person app.POST("/person", func(ctx *gofr.Context) (any, error) { var person Person if err := ctx.Bind(&person); err != nil { return ErrorResponse{Message: "Invalid request body"}, nil } result, err := ctx.SurrealDB.Create(ctx, "person", map[string]any{ "name": person.Name, "age": person.Age, "email": person.Email, }) if err != nil { return nil, err } return result, nil }) app.Run() } ``` --- # Production guides ## https://gofr.dev/docs/guides/auth-in-kubernetes # Auth in Kubernetes {% answer %} GoFr supports Basic Auth, API key auth, and OAuth 2.0 JWT validation against a JWKS endpoint (`EnableOAuth(jwksEndpoint, refreshInterval, options...)`). In Kubernetes, point the JWKS URL at your IdP (cluster-internal Service or public URL), inject API keys via Vault Agent or sealed Secrets, and prefer mTLS or JWT over static keys for service-to-service calls. 
{% /answer %} ## What GoFr provides Three authentication categories are exposed on the App, all verified in `pkg/gofr/auth.go` — Basic auth, API-key auth, and OAuth/JWT — each with a static-credentials variant and a custom-validator variant: - `EnableBasicAuth(credentials...)` — pairs of username/password. - `EnableBasicAuthWithValidator(fn)` — custom validator with access to the container. - `EnableAPIKeyAuth(keys...)` — `X-Api-Key` header check. - `EnableAPIKeyAuthWithValidator(fn)` — custom validator. - `EnableOAuth(jwksEndpoint, refreshIntervalSeconds, options ...jwt.ParserOption)` — JWT validation with periodic JWKS refresh. A single call enables auth on both HTTP and gRPC. The entire `/.well-known/*` prefix (including `/.well-known/alive` and `/.well-known/health`) is auth-exempt by default — see `pkg/gofr/http/middleware/validate.go`. Re-protect `health` if you publish it externally. For full code examples, see [Authentication](/docs/advanced-guide/authentication). ## OAuth 2.0 with JWKS in Kubernetes `EnableOAuth` registers an internal HTTP service named `gofr_oauth` to fetch keys, then validates JWTs on every request. Two deployment patterns: ### Public IdP (Auth0, Okta, Google, Azure AD) ```go app.EnableOAuth("https://your-tenant.auth0.com/.well-known/jwks.json", 3600, jwt.WithAudience("https://api.example.com"), jwt.WithIssuer("https://your-tenant.auth0.com/"), jwt.WithExpirationRequired()) ``` Egress from your cluster must be allowed to reach the IdP. If you have a strict NetworkPolicy, allowlist the IdP CIDR or use a forward proxy. ### Cluster-internal IdP (Keycloak, Dex, Hydra) If your IdP runs in the same cluster, point at its in-cluster Service DNS: ```go app.EnableOAuth("http://keycloak.iam.svc.cluster.local:8080/realms/prod/protocol/openid-connect/certs", 3600) ``` The JWKS fetch is cheap, and the `refreshInterval` controls how stale your key cache can be. A typical value is 600–3600 seconds. 
After key rotation by the IdP, requests with old tokens fail until the cache refreshes. ## Storing API keys: Vault `EnableAPIKeyAuth` takes the keys directly. In production, source them from a secret manager: - **Vault Agent sidecar** — mounts a rendered file or sets env vars at pod start, refreshing on a schedule. Inject via the `vault.hashicorp.com/agent-inject: "true"` annotation and read with `app.Config.Get`. - **External Secrets Operator** — syncs Vault, AWS Secrets Manager, or GCP Secret Manager into a Kubernetes Secret. Mount as env vars. - **Sealed Secrets** — fine for low-rotation keys committed to GitOps repos. Avoid bundling API keys into ConfigMaps or container images. ## Service-to-service auth: pick one model For internal calls between GoFr services, three reasonable models: 1. **mTLS via service mesh** — Istio `PeerAuthentication: STRICT` or Linkerd automatic mTLS. No GoFr code change needed. Strongest identity, requires mesh ops. 2. **JWT** — the calling service obtains a token (client credentials or workload identity) and the receiving GoFr service uses `EnableOAuth`. Works without a mesh. 3. **Shared API key** — simple but rotation-heavy and gives no per-caller identity. Acceptable for low-trust internal endpoints. Avoid mixing on a single endpoint. Pick one per trust boundary. ## Refresh strategy For OAuth, GoFr refreshes JWKS on the interval you pass. The receiving service does not refresh user tokens — that is the client's responsibility. For long-lived clients (cron jobs, batch workers), refresh ahead of expiry rather than on 401. ## Accessing claims in handlers Once OAuth is enabled, `ctx.GetAuthInfo().GetClaims()` returns the parsed claim map. Cast specific claims as needed: ```go claims := ctx.GetAuthInfo().GetClaims() userID, _ := claims["sub"].(string) ``` ## Liveness must stay open Kubernetes liveness probes fire from the kubelet without credentials. GoFr exempts `/.well-known/alive` from auth so probes succeed. 
Do not put auth in front of the alive endpoint via an Ingress filter. ## Health endpoint `/.well-known/health` is exempted by default but reveals dependency status. In production, re-enable auth on it (or restrict via NetworkPolicy to your monitoring namespace) so it is not enumerable from the public internet. ## TLS Always serve credentials and tokens over TLS. Inside the cluster, the mesh (or `CERT_FILE` / `KEY_FILE` configured directly on GoFr) terminates TLS; on the edge, the Ingress does. {% faq %} {% faq-item question="Can I run JWKS-based JWT auth without a service mesh?" %} Yes. `EnableOAuth` just needs an HTTP-reachable JWKS endpoint. The mesh is optional and adds mTLS at the network layer. {% /faq-item %} {% faq-item question="Where should API keys be stored?" %} In a secret manager (Vault, AWS/GCP Secrets Manager) and surfaced to the pod as environment variables via Vault Agent or External Secrets Operator. Never in ConfigMaps or container images. {% /faq-item %} {% faq-item question="Does enabling auth in GoFr also protect gRPC?" %} Yes. A single call to `EnableBasicAuth`, `EnableAPIKeyAuth`, or `EnableOAuth` registers middleware on both the HTTP and gRPC servers — verified in `pkg/gofr/auth.go`. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/cicd-recipes # CI/CD Recipes {% answer %} A GoFr CI pipeline is a standard Go pipeline plus a container build: lint, `go test`, build a versioned Docker image, push to a registry, then deploy via Helm or `kubectl apply`. Use OIDC for cloud auth, cache Go modules and build output, and tag images with both a short SHA and a semver. 
{% /answer %} ## GitHub Actions: end-to-end workflow ```yaml name: ci-cd on: push: branches: [main] pull_request: concurrency: group: ${{ github.workflow }}-${{ github.ref }} cancel-in-progress: true permissions: contents: read id-token: write # for OIDC to cloud packages: write # for GHCR push jobs: test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: go-version: '1.25' # gofr.dev requires Go >= 1.25 (per its go.mod). Alternatively use `go-version-file: go.mod` to auto-track. cache: true # caches modules + build cache automatically - run: go vet ./... - run: go test -race -coverprofile=cover.out ./... build-push: needs: test if: github.ref == 'refs/heads/main' runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: docker/setup-buildx-action@v3 - uses: docker/login-action@v3 with: registry: ghcr.io username: ${{ github.actor }} password: ${{ secrets.GITHUB_TOKEN }} - id: meta uses: docker/metadata-action@v5 with: images: ghcr.io/${{ github.repository }} tags: | type=sha,prefix=,format=short type=semver,pattern={{version}} type=raw,value=latest,enable={{is_default_branch}} - uses: docker/build-push-action@v5 with: context: . push: true tags: ${{ steps.meta.outputs.tags }} labels: ${{ steps.meta.outputs.labels }} cache-from: type=gha cache-to: type=gha,mode=max deploy: needs: build-push runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: arn:aws:iam::123456789012:role/gha-deployer aws-region: us-east-1 - run: aws eks update-kubeconfig --name prod-cluster - run: | helm upgrade --install gofr-api ./chart \ --set image.tag=${{ github.sha }} \ --wait --timeout 5m ``` Key points: - `actions/setup-go` with `cache: true` caches `~/go/pkg/mod` and the build cache between runs. - `concurrency` cancels superseded runs on the same branch — prevents two deploys from racing. 
- The `id-token: write` permission plus `aws-actions/configure-aws-credentials` uses GitHub OIDC to assume an IAM role with no long-lived keys. ## Image tagging strategy Pick a tag scheme that gives you both *traceability* and *promotability*: - `git-sha-short` (e.g., `a1b2c3d`) for every commit — unique, immutable, easy to roll back to. - Semver (`1.4.2`) on tagged releases for human-friendly references and Helm chart values. - `latest` only on the default branch and never used in production manifests — it makes rollbacks ambiguous. Helm values should pin to the SHA tag, not `latest`. ## Secrets in CI In order of preference: 1. **OIDC** to AWS / GCP / Azure / Vault — no static secret stored in CI. 2. Encrypted CI variables scoped to a single environment. 3. Long-lived API tokens — last resort, rotate often. Never echo secrets into logs. Mask them by setting them as masked variables. ## Database migrations Run migrations as part of deploy, not as part of the image build. See [DB Migrations in CI/CD](/docs/guides/db-migrations-in-cicd) for the Helm pre-install hook and Job patterns. ## GitLab CI The shape is identical: a `test` job, a `build` job using `kaniko` or `buildah`, and a `deploy` job using `helm`. Use GitLab's OIDC support (`CI_JOB_JWT_V2`) for cloud auth. ```yaml build: image: gcr.io/kaniko-project/executor:debug script: - /kaniko/executor --dockerfile=Dockerfile --destination=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA ``` ## CircleCI Use orbs (`circleci/aws-cli`, `circleci/kubernetes`) and CircleCI's OIDC tokens (`oidc_token`) to assume cloud roles. Cache Go with `restore_cache` keyed on `go.sum`. ## Gotchas - A failed migration should fail the deploy. Always set `--wait` on Helm and a non-zero exit on the migration Job. - Never run integration tests against a shared production database. Spin up an ephemeral DB in the CI runner. - The Go test race detector (`-race`) catches subtle data races in handlers — keep it on for the unit test stage. 
- If you build a single multi-arch image, use `docker/build-push-action` with `platforms: linux/amd64,linux/arm64`. {% faq %} {% faq-item question="Should I run go test inside Docker or on the runner?" %} On the runner. It is faster (Go module cache reuse) and gives clearer logs. The Docker build only happens once tests pass. {% /faq-item %} {% faq-item question="How do I authenticate to AWS or GCP from GitHub Actions without storing keys?" %} Use OIDC. Configure a trust policy on the cloud role that trusts GitHub's OIDC issuer, then use `aws-actions/configure-aws-credentials` or `google-github-actions/auth` with the `id-token: write` permission. {% /faq-item %} {% faq-item question="What image tag should production manifests reference?" %} The short Git SHA. It is unique, immutable, and makes rollbacks unambiguous. Reserve `latest` for development environments only. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/cloud-deployment # Cloud Deployment {% answer %} A GoFr container is a stock Linux Go binary listening on `HTTP_PORT` (default 8000), `GRPC_PORT` (default 9000), and `METRICS_PORT` (default 2121). It runs unchanged on EKS, GKE, and AKS — what differs is the Ingress controller, the LoadBalancer flavor, and how the pod gets credentials for managed datasources. {% /answer %} ## Common Kubernetes shape Regardless of cloud, your pod exposes: - `8000` — HTTP API (overridable via `HTTP_PORT`) - `9000` — gRPC (overridable via `GRPC_PORT`) - `2121` — Prometheus metrics (overridable via `METRICS_PORT`; set to `0` to disable) - `/.well-known/alive` — liveness - `/.well-known/health` — readiness (covers dependencies) These defaults are confirmed in `pkg/gofr/default.go` and `pkg/gofr/factory.go`. 
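These port and probe defaults can be wired into a Deployment's container spec roughly as follows — a minimal sketch, not a full manifest; the `orders` name and image tag are illustrative placeholders (complete manifests live in the deploying-to-kubernetes guide):

```yaml
# Sketch only — names and image are placeholders; adjust to your manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2
  selector:
    matchLabels: {app: orders}
  template:
    metadata:
      labels: {app: orders}
    spec:
      containers:
        - name: orders
          image: ghcr.io/example/orders:a1b2c3d
          ports:
            - {name: http, containerPort: 8000}     # HTTP_PORT default
            - {name: grpc, containerPort: 9000}     # GRPC_PORT default
            - {name: metrics, containerPort: 2121}  # METRICS_PORT default
          livenessProbe:
            httpGet: {path: /.well-known/alive, port: 8000}
          readinessProbe:
            httpGet: {path: /.well-known/health, port: 8000}
```

Note the liveness probe targets the lightweight `alive` endpoint, while readiness uses `health`, which also checks dependencies.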
## AWS EKS

**Cluster bring-up.** Minimal `eksctl` invocation:

```bash
eksctl create cluster \
  --name <cluster-name> \
  --region <region> \
  --nodegroup-name <nodegroup-name> \
  --node-type t3.medium \
  --nodes 2 \
  --managed
```

**Ingress controller install.** Install the AWS Load Balancer Controller via Helm (after IRSA is set up for the controller's ServiceAccount — see the controller's install docs for the IAM policy and ServiceAccount creation):

```bash
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```

**Ingress.** Use the AWS Load Balancer Controller, which provisions an ALB from an `Ingress` resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gofr-api
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /.well-known/alive
spec:
  ingressClassName: alb
  rules: [...]
```

For raw TCP (e.g., gRPC) prefer a `Service: type=LoadBalancer` annotated with `service.beta.kubernetes.io/aws-load-balancer-type: nlb`. ALB is L7-only.

**IAM.** Use IRSA (IAM Roles for Service Accounts). Annotate the ServiceAccount with `eks.amazonaws.com/role-arn: arn:aws:iam::...`. The AWS SDK inside any GoFr S3 / SQS / SNS datasource will pick the credentials up automatically. Do not bake static keys into env vars.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orders
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-name>
```

**Datasources.** RDS for PostgreSQL/MySQL works directly with GoFr's SQL driver — set `DB_HOST`, `DB_PORT`, etc. ElastiCache Redis works via the Redis datasource. Aurora's failover is handled by the cluster endpoint; GoFr will reconnect on failure.

**Persistent storage.** Only relevant if you use the `local` file storage driver. Use an EBS-backed `PersistentVolumeClaim`.
For multi-AZ, switch to S3 file storage instead.

**Smoke test.**

```bash
# Apply the manifests from /docs/guides/deploying-to-kubernetes
kubectl apply -f k8s/
kubectl wait --for=condition=available --timeout=120s deployment/orders
kubectl port-forward svc/orders 8080:80 &
curl -s http://localhost:8080/.well-known/health
```

**Cost note.** A 2-node `t3.medium` managed node group is the smallest commonly-used EKS shape; you also pay the EKS control plane hourly fee per cluster. Expect a low-double-digit USD/day for an idle cluster at list price, before egress, NAT Gateway, EBS, ALB hours, and CloudWatch ingestion. Use the [AWS Pricing Calculator](https://calculator.aws/) for an account-accurate figure.

For canonical syntax see: `https://kubernetes-sigs.github.io/aws-load-balancer-controller/`.

## GCP GKE

**Cluster bring-up.** Minimal `gcloud` invocation:

```bash
gcloud container clusters create <cluster-name> \
  --region <region> \
  --num-nodes 2 \
  --machine-type e2-standard-2 \
  --release-channel regular
```

**Ingress controller install.** GKE ships the GCE Ingress controller enabled by default — no install step needed. If you'd rather use NGINX or another controller, install it the same way you would on any cluster.

**Ingress.** GKE has a built-in GCE Ingress controller that provisions an HTTP(S) Load Balancer:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gofr-api
spec:
  ingressClassName: gce
  rules: [...]
```

For container-native load balancing (recommended), expose the Service as `type: ClusterIP` with the `cloud.google.com/neg: '{"ingress": true}'` annotation.

**LoadBalancer tier.** A `Service: type=LoadBalancer` defaults to the Premium network tier. To force Standard, set `cloud.google.com/network-tier: Standard`. Premium gives lower latency but higher cost.

**IAM.** Use Workload Identity. Bind a Kubernetes ServiceAccount to a Google IAM service account with the `iam.gke.io/gcp-service-account` annotation.
GoFr's GCS file storage and Pub/Sub datasources will use those credentials.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orders
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: <gsa-name>@<project-id>.iam.gserviceaccount.com
```

**Datasources.** Cloud SQL: connect via the Cloud SQL Auth Proxy as a sidecar, or use Private IP and a VPC-native cluster. For Memorystore Redis, set `REDIS_HOST` to the private IP.

**Persistent storage.** `pd-ssd` PersistentDisk works for the local file storage driver. PD is zonal — use Regional PD for multi-zone availability.

**Smoke test.**

```bash
# Apply the manifests from /docs/guides/deploying-to-kubernetes
kubectl apply -f k8s/
kubectl wait --for=condition=available --timeout=120s deployment/orders
kubectl port-forward svc/orders 8080:80 &
curl -s http://localhost:8080/.well-known/health
```

**Cost note.** Two `e2-standard-2` nodes is a common minimal shape; GKE Standard also charges a per-cluster management fee (Autopilot bills differently — per-pod). Idle list-price cost is typically a low-double-digit USD/day before egress, NAT, persistent disks, LB forwarding rules, and Cloud Logging ingestion. The [Google Cloud Pricing Calculator](https://cloud.google.com/products/calculator) gives an account-accurate figure.

Canonical docs: `https://cloud.google.com/kubernetes-engine/docs/concepts/ingress`.

## Azure AKS

**Cluster bring-up.** Minimal `az` invocation:

```bash
az aks create \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --node-count 2 \
  --node-vm-size Standard_DS2_v2 \
  --enable-managed-identity \
  --generate-ssh-keys
```

**Ingress controller install.** Either install NGINX via Helm:

```bash
helm install ingress-nginx ingress-nginx/ingress-nginx \
  -n ingress-nginx \
  --create-namespace
```

or enable the Application Gateway Ingress Controller (AGIC) for an Azure-native L7 path — see Microsoft's AGIC install docs for the exact flags, which depend on whether you bring your own Application Gateway or let AKS create one.
**Ingress.** Two common options:

- Application Gateway Ingress Controller (AGIC) — uses an Azure Application Gateway, integrates with WAF.
- NGINX Ingress Controller — vendor-neutral, runs anywhere.

AGIC sample (modern form): set `spec.ingressClassName: azure-application-gateway` on the Ingress instead of using the deprecated `kubernetes.io/ingress.class` annotation.

**LoadBalancer.** A `Service: type=LoadBalancer` provisions an Azure Standard Load Balancer by default. For internal-only, add `service.beta.kubernetes.io/azure-load-balancer-internal: "true"`.

**IAM.** Use AKS Managed Identity with workload federation. Bind a UserAssignedIdentity via federated credentials. The Azure SDK in GoFr's Azure file storage and Event Hub datasources picks them up automatically.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orders
  namespace: default
  annotations:
    azure.workload.identity/client-id: <client-id>
```

**Datasources.** Azure Database for PostgreSQL/MySQL works with GoFr's SQL driver over private endpoints. Azure Cache for Redis works via the Redis datasource.

**Persistent storage.** Azure Disk (`managed-csi`) for single-zone; Azure Files (CSI) when you need ReadWriteMany.

**Smoke test.**

```bash
# Apply the manifests from /docs/guides/deploying-to-kubernetes
kubectl apply -f k8s/
kubectl wait --for=condition=available --timeout=120s deployment/orders
kubectl port-forward svc/orders 8080:80 &
curl -s http://localhost:8080/.well-known/health
```

**Cost note.** Two `Standard_DS2_v2` nodes is a common minimal AKS shape; the AKS control plane is free in the Free tier (paid in Standard/Premium tiers, which add an SLA fee per cluster-hour). Expect a low-double-digit USD/day at list price for an idle cluster before egress, public IPs, managed disks, Application Gateway hours, and Log Analytics ingestion. Use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for an account-accurate figure.
Canonical docs: `https://learn.microsoft.com/azure/aks/`. ## Common gotchas - Set `terminationGracePeriodSeconds` longer than your slowest in-flight request so GoFr's graceful shutdown can drain. - If you front gRPC with an L7 LB (ALB, GCE), confirm HTTP/2 end-to-end — ALB needs `BackendProtocolVersion=GRPC`. - Use cloud-native logging (CloudWatch / Cloud Logging / Azure Monitor) only after confirming GoFr's structured JSON logs are not double-parsed. {% faq %} {% faq-item question="Do I need cloud-specific code for GoFr to run on EKS, GKE, or AKS?" %} No. GoFr runs the same binary everywhere. Cloud differences live in the Kubernetes manifests (Ingress, IAM bindings) and in connection strings for managed datasources. {% /faq-item %} {% faq-item question="Should I use static cloud credentials in environment variables?" %} No. Use IRSA on EKS, Workload Identity on GKE, and Managed Identity on AKS. The cloud SDKs that GoFr's datasources sit on top of will pick up the credentials automatically. {% /faq-item %} {% faq-item question="Which Ingress controller is required for GoFr?" %} None — GoFr does not depend on a specific Ingress. ALB (EKS), GCE (GKE), AGIC or NGINX (AKS) all work as long as they can route to port 8000. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/connection-pooling # Connection Pooling {% answer %} GoFr exposes per-datasource pool knobs as environment variables — `DB_MAX_OPEN_CONNECTION` and `DB_MAX_IDLE_CONNECTION` for SQL, plus a programmatic `service.ConnectionPoolConfig` for outbound HTTP calls. On Kubernetes the rule that matters is `replicas × per_pod_pool ≤ database max_connections`. Size pools using the formula `target_connections = peak_qps × p99_latency_seconds` so each pod has just enough capacity to absorb its share of traffic. {% /answer %} ## When to use Default pool sizes are tuned for development. 
Under production load on Kubernetes you will hit one of two failure modes: pool exhaustion (request hangs waiting for a connection) or database overload (`FATAL: too many connections` from PostgreSQL, `Too many connections` from MySQL). Both are operational, not code, problems — they're solved by sizing the pool against measured traffic and the database's hard ceiling. ## SQL connection pool GoFr reads two env vars in `pkg/gofr/datasource/sql` and applies them with `database/sql`: | Env var | Default | Behavior | |---|---|---| | `DB_MAX_OPEN_CONNECTION` | `0` (unlimited) | Maps to `SetMaxOpenConns` | | `DB_MAX_IDLE_CONNECTION` | `2` | Maps to `SetMaxIdleConns` | ```dotenv DB_DIALECT=postgres DB_HOST=postgres-primary DB_PORT=5432 DB_NAME=orders DB_USER=orders_app DB_PASSWORD=... DB_MAX_OPEN_CONNECTION=20 DB_MAX_IDLE_CONNECTION=5 ``` GoFr does not currently expose a knob for `SetConnMaxLifetime` / `SetConnMaxIdleTime`; rely on the database's own idle-timeout to recycle stale connections. For replica reads, the framework also recognizes `DB_REPLICA_*` variables (hosts, ports, users, passwords, plus pool sizing). See [GoFr Configuration Options](/docs/references/configs) for the full list. ### Sizing math Start from measured load: ```text target_connections_per_pod = ceil(peak_qps_per_pod × p99_query_latency_seconds) ``` A pod serving 200 QPS with 50 ms P99 query latency needs `200 × 0.05 = 10` connections to keep the queue empty. Add 50% headroom for spikes → `DB_MAX_OPEN_CONNECTION=15`. Then verify against the database ceiling: ```text total = replicas × DB_MAX_OPEN_CONNECTION + other_consumers total ≤ database.max_connections × 0.8 # leave 20% for admins, migrations ``` For PostgreSQL with `max_connections=200`, 10 replicas, `DB_MAX_OPEN_CONNECTION=15` → 150 connections, fits inside the 80% budget. If your replica count outgrows this, put PgBouncer in front and have GoFr connect through it. 
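The sizing arithmetic above can be captured in a few lines of Go — a sketch, not part of GoFr; the `poolSize` and `fitsBudget` helpers are hypothetical, and the numbers mirror the worked example (200 QPS, 50 ms P99, 50% headroom, 10 replicas, Postgres `max_connections=200`):

```go
package main

import (
	"fmt"
	"math"
)

// poolSize returns a per-pod DB_MAX_OPEN_CONNECTION value:
// ceil(peak QPS × p99 query latency) plus fractional headroom for spikes.
func poolSize(peakQPS, p99Seconds, headroom float64) int {
	return int(math.Ceil(peakQPS * p99Seconds * (1 + headroom)))
}

// fitsBudget checks replicas × per-pod pool against 80% of the
// database's max_connections, leaving room for admins and migrations.
func fitsBudget(replicas, perPod, maxConnections int) bool {
	return replicas*perPod <= maxConnections*80/100
}

func main() {
	perPod := poolSize(200, 0.05, 0.5)       // 200 QPS, 50 ms P99, 50% headroom → 15
	fmt.Println(perPod)                      // 15
	fmt.Println(fitsBudget(10, perPod, 200)) // 150 ≤ 160 → true
}
```

If `fitsBudget` comes back false, either lower `DB_MAX_OPEN_CONNECTION`, cap replica count, or put PgBouncer between the pods and the database.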
### Symptoms of exhaustion - Latency suddenly spikes for *some* requests while others stay normal — handlers blocked on `db.Conn`. - Logs show repeated `dial tcp ...: i/o timeout` after a deploy that increased replica count. - Postgres `pg_stat_activity` count is at or near `max_connections`. GoFr's SQL datasource exports two pool gauges: `app_sql_open_connections` (total connections in the pool) and `app_sql_inUse_connections` (connections currently checked out by a query). Alert when `app_sql_inUse_connections` is at or near the `MaxOpenConns` ceiling for sustained periods — that's the saturation signal. There is no separate idle-connections gauge; idle = `open - inUse`, derive it in PromQL if you need to track it. ## Redis, Mongo, and other datasources Each datasource driver in GoFr has its own defaults — see the [configuration reference](/docs/references/configs) for the env vars that exist for Redis, MongoDB, Cassandra, ClickHouse, and others. The same `replicas × per_pod_pool ≤ server_max` rule applies: a Redis cluster with `maxclients=10000` and 50 pods leaves 200 connections per pod. ## Outbound HTTP service pool Service-to-service calls share connections through Go's `http.Transport`. GoFr exposes this as `service.ConnectionPoolConfig` on `AddHTTPService`: ```go import ( "time" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/service" ) func main() { app := gofr.New() app.AddHTTPService("payments", "https://payments.internal", &service.ConnectionPoolConfig{ MaxIdleConns: 100, MaxIdleConnsPerHost: 20, IdleConnTimeout: 90 * time.Second, }, &service.CircuitBreakerConfig{Threshold: 4, Interval: 5 * time.Second}, &service.RetryConfig{MaxRetries: 3}, ) app.Run() } ``` One pitfall to know: - **Go's default `MaxIdleConnsPerHost` is 2.** That's almost always too low for a microservice talking to one downstream — bump it to 10–20 for typical traffic, higher for chatty services. You can pass `ConnectionPoolConfig` in any position among the `AddHTTPService` options. 
The framework's `extractHTTPService` helper recursively unwraps the circuit-breaker, retry, and auth wrappers when applying pool config, so option order does not matter. ## Verification ```bash # SQL — Postgres psql -c "SELECT count(*) FROM pg_stat_activity WHERE datname='orders';" # In-cluster — SQL pool gauges and Redis command-latency histogram via /metrics # (GoFr exposes app_sql_open_connections + app_sql_inUse_connections gauges for SQL, # and app_sql_stats / app_redis_stats histograms for command latency. There is no # dedicated Redis pool gauge — observe pool pressure via app_redis_stats latency.) curl http://orders-api.prod:2121/metrics | grep -E 'app_sql_(open|inUse)_connections|app_(sql|redis)_stats' ``` After a deploy, watch the connection count climb to roughly `replicas × DB_MAX_OPEN_CONNECTION` under steady load. If it exceeds that, something is bypassing the framework's datasource (e.g., a hand-rolled `sql.Open`). {% faq %} {% faq-item question="What are the exact env vars for GoFr's SQL pool?" %} `DB_MAX_OPEN_CONNECTION` (default 0 = unlimited) and `DB_MAX_IDLE_CONNECTION` (default 2). They map to `SetMaxOpenConns` and `SetMaxIdleConns` respectively. {% /faq-item %} {% faq-item question="Why is my HTTP client only using 2 connections per host?" %} That's Go's `DefaultMaxIdleConnsPerHost`. Pass a `service.ConnectionPoolConfig` with `MaxIdleConnsPerHost: 20` (in any position among the `AddHTTPService` options — the framework unwraps circuit-breaker/retry/auth wrappers regardless of order). {% /faq-item %} {% faq-item question="Should I cap DB_MAX_OPEN_CONNECTION on every service?" %} Yes. The default of `0` (unlimited) is fine for local development but lets a single misbehaving pod exhaust the database. Always set an explicit limit in production. 
{% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/db-migrations-in-cicd # DB Migrations in CI/CD {% answer %} GoFr's built-in migrations run on app start, coordinated by a distributed lock so multi-replica deploys are safe. In CI/CD you have two clean choices: let the app run them on startup, or run them as a separate Helm pre-upgrade Job. The Job pattern is generally preferable because it fails fast, has its own logs, and gates the rollout. {% /answer %} {% howto name="Run GoFr database migrations in CI/CD" description="Pick an in-process startup pattern (default) or a separate cmd/migrate binary triggered by a Helm pre-upgrade Job." steps=[{"name": "Register migrations", "text": "Call app.Migrate(map[int64]migration.Migrate{...}) before app.Run — GoFr applies them synchronously on every replica startup."}, {"name": "Trust the distributed lock", "text": "GoFr uses gofr_migration_locks plus a gofr_migrations version table so only one replica runs each migration; others observe and no-op."}, {"name": "Choose the deployment pattern", "text": "For small services, in-process is enough. For multi-replica prod, build a separate cmd/migrate Go binary and run it as a Helm pre-upgrade Job."}, {"name": "Build the migrate binary", "text": "Create cmd/migrate/main.go that calls gofr.New, registers migrations, calls app.Migrate, and exits without ever calling app.Run."}, {"name": "Wire the Helm pre-upgrade Job", "text": "Create a Job manifest with helm.sh/hook: pre-install,pre-upgrade and command pointing at the migrate binary, using the same image as the Deployment."}, {"name": "Verify on rollout", "text": "After helm upgrade, kubectl logs job/<release>-migrate to confirm the migration ran; query gofr_migrations table to see applied versions."}] /%} ## What GoFr provides GoFr ships a migration system that you wire up via `app.Migrate(migrations.All())`. It supports MySQL, PostgreSQL, Redis, ClickHouse, Cassandra, and Elasticsearch.
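Registration is a plain map from version number to migration. A minimal sketch of what a `migrations` package behind `migrations.All()` might look like — the package name, version key, and `orders` table are illustrative; the `migration.Migrate` / `UP` shape is GoFr's API:

```go
package migrations

import "gofr.dev/pkg/gofr/migration"

// All returns every migration keyed by a monotonically increasing version.
// GoFr records applied versions in gofr_migrations and runs only new ones.
func All() map[int64]migration.Migrate {
	return map[int64]migration.Migrate{
		20240101120000: createOrdersTable(),
	}
}

func createOrdersTable() migration.Migrate {
	return migration.Migrate{
		UP: func(d migration.Datasource) error {
			// Idempotent DDL: safe to re-run after a partial failure.
			_, err := d.SQL.Exec(`CREATE TABLE IF NOT EXISTS orders (
				id SERIAL PRIMARY KEY,
				total NUMERIC NOT NULL
			)`)
			return err
		},
	}
}
```

Both the serving binary and a `cmd/migrate` binary can import this package, which keeps the migration set in one place.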
Records are kept in a `gofr_migrations` table (or Redis hash). A distributed lock (`gofr_migration_locks` table or Redis `SETNX`) prevents two replicas from running the same migration concurrently — see [Handling Data Migrations](/docs/advanced-guide/handling-data-migrations) for the full mechanics. That means *any* deployment shape works correctness-wise. The CI/CD question is operational: do you want migrations tied to app startup, or separated? ## Option A: Migrations on app start (default) The simplest setup, and the default GoFr lifecycle. Every replica calls `app.Migrate(...)` in-process before serving traffic — there is no separate migration binary or subcommand. The first replica to acquire the lock runs the migration; the others observe the populated `gofr_migrations` table and no-op. After migrations finish, all replicas continue normal startup. Pros: - Zero extra infra. One artifact per service. - Migrations cannot drift from code — they ship in the same image. - Idempotent under concurrency: the lock plus the version table guarantee each migration runs exactly once across replicas. Cons: - A migration error fails the readiness probe of every replica simultaneously, which can take down healthy old pods if the rollout strategy isn't careful. - Slow migrations delay every pod's start. - Logs are mixed with normal application logs. Use this for small services and early-stage projects. ## Option B: Separate `cmd/migrate` binary as a Helm pre-upgrade Job For multi-replica production services, run migrations as a Kubernetes Job triggered by Helm before the Deployment rolls forward. **There is no built-in `gofr migrate` CLI or `MIGRATE_ONLY` env mode in the framework.** Instead, organize your application as two binaries built from the same Go module: the serving binary (`cmd/server` or your existing `main.go`) and a small dedicated migration binary (`cmd/migrate`). 
The migration binary calls `gofr.New()`, registers migrations, calls `app.Migrate(...)` (which is synchronous and runs to completion before returning), and exits without ever calling `app.Run()`. This is application code organization — not a framework knob. ```go // cmd/migrate/main.go package main import ( "gofr.dev/pkg/gofr" "yourmodule/migrations" ) func main() { app := gofr.New() // app.Migrate runs migrations synchronously using GoFr's distributed lock // and returns once they have completed (or failed). No app.Run() — this // binary is intended to be invoked as a one-shot Job. app.Migrate(migrations.All()) } ``` Build it as a separate binary in the same image (or a slimmer migrate-only image), and invoke it from the Helm pre-upgrade Job: ```yaml apiVersion: batch/v1 kind: Job metadata: name: {{ .Release.Name }}-migrate-{{ .Values.image.tag | replace ":" "-" }} annotations: "helm.sh/hook": pre-install,pre-upgrade "helm.sh/hook-weight": "-5" "helm.sh/hook-delete-policy": before-hook-creation spec: backoffLimit: 0 template: spec: restartPolicy: Never containers: - name: migrate image: "{{ .Values.image.repo }}:{{ .Values.image.tag }}" # Invoke the dedicated migration binary built from cmd/migrate. command: ["/app/migrate"] envFrom: - secretRef: name: {{ .Release.Name }}-db ``` Pros: - Failed migration fails the Helm release atomically; the rollout never starts. - Job logs are clean and separately addressable: `kubectl logs job/...`. - Application pods see the new schema by the time they boot. - The migration binary is just Go code — no framework feature to learn beyond `app.Migrate`. Cons: - One more binary to build and template. The migrator Job's image SHA must always match the Deployment image SHA. Use this for production. ## Init container: usually not the right tool It is tempting to put migrations in an `initContainer`. Don't, in multi-replica deploys. Each pod's init container will race for the lock. 
GoFr's lock makes that *safe*, but it also means N-1 pods just wait for nothing while the rollout takes longer than necessary, and a failing migration manifests as N pods CrashLooping. A pre-upgrade Job centralizes the failure into one Pod and one log stream. Init containers are fine for single-replica services or local dev. ## Expand-contract for zero downtime Schema changes that break the previous app version are dangerous during a rolling deploy because both versions run simultaneously. Use the expand-contract pattern: 1. **Expand** — release migration A that *adds* the new column/table without removing the old one. Old code keeps working. 2. **Migrate code** — release the app version that writes both old and new, reads new (or vice versa). 3. **Backfill** — copy data from old to new in a background Job if needed. 4. **Contract** — once the app version is stable, release migration B that drops the old column/table. This typically means at least two deploys per breaking change. It is the price of zero-downtime. ## Rollback strategy GoFr currently runs migrations in `UP` mode only (verified against `pkg/gofr/migration` semantics described in the migrations doc). That has implications for rollback: - App-level rollback (image SHA): always safe if the schema change was expand-only. - Schema rollback: write a *new* forward migration that reverses the change. Treat the database as append-only history. - Snapshots before destructive migrations are a safety net for genuine emergencies. ## Idempotency Write migration SQL so re-running it is harmless: `CREATE TABLE IF NOT EXISTS`, `ALTER TABLE ... ADD COLUMN IF NOT EXISTS` (PostgreSQL), or guarded `IF` checks. The lock prevents concurrent runs, but idempotency protects against partial failures and manual re-runs. ## CI ergonomics - Run migrations against an ephemeral database in CI on every PR. If the migration fails in CI, it never reaches prod. 
- Tag the migration Job with the Helm release name and image SHA so old Jobs are identifiable: `migrate-{{ .Release.Name }}-{{ .Values.image.tag }}`. - Pin the database driver version in `go.mod` and treat upgrades as their own change. {% faq %} {% faq-item question="Are GoFr migrations safe to run from many replicas at once?" %} Yes. GoFr coordinates with a distributed lock — one replica runs, the others wait. See the multi-instance section in Handling Data Migrations. {% /faq-item %} {% faq-item question="Should I use a Helm pre-upgrade Job or let the app run migrations at startup?" %} For multi-replica production services, prefer the Job. It fails fast, has clean logs, and gates the rollout. App-startup migrations are fine for single-replica or small services. {% /faq-item %} {% faq-item question="How do I roll back a schema change?" %} Write a new forward migration that reverses it. Combine with the expand-contract pattern so each step is reversible and the previous app version keeps working. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/deploying-to-kubernetes # Deploying GoFr to Kubernetes {% answer %} Deploy a GoFr service to Kubernetes by pointing the readiness probe at `/.well-known/health`, the liveness probe at `/.well-known/alive`, and feeding non-secret config through a ConfigMap (`envFrom`) and credentials through a Secret. Set `terminationGracePeriodSeconds` higher than the longest in-flight request so GoFr's graceful shutdown can drain cleanly. {% /answer %} {% howto name="Deploy a GoFr service to Kubernetes" description="Containerize, apply manifests, wire health probes, and tune graceful shutdown." 
steps=[{"name":"Containerize the binary","text":"Build a multi-stage Dockerfile that compiles the GoFr binary statically and runs it on distroless or alpine."},{"name":"Apply Deployment and Service","text":"Apply a Deployment with replicas and resource requests, plus a ClusterIP Service exposing port 8000."},{"name":"Wire health probes","text":"Set readinessProbe httpGet path to /.well-known/health and livenessProbe to /.well-known/alive on port 8000."},{"name":"Inject configuration","text":"Feed non-secret env via ConfigMap and credentials via Secret, both attached with envFrom on the container."},{"name":"Tune graceful shutdown","text":"Set terminationGracePeriodSeconds higher than your longest in-flight request so SIGTERM drains cleanly."},{"name":"Roll out and verify","text":"kubectl rollout status, then port-forward and curl /.well-known/health to confirm datasources are reachable."}] /%} ## When to use this guide You have a GoFr service already containerized (see {% new-tab-link newtab=false title="Dockerizing GoFr Services" href="/docs/guides/dockerizing-gofr-services" /%}) and a Kubernetes cluster (kind, EKS, GKE, AKS, or on-prem). This guide covers the manifest set for a stateless HTTP service: Deployment, Service, ConfigMap, Secret, and an optional HorizontalPodAutoscaler. 
## How GoFr features map to Kubernetes resources | GoFr feature | Kubernetes object | Notes | |---|---|---| | `/.well-known/alive` | `livenessProbe.httpGet` | Restart unhealthy pods | | `/.well-known/health` | `readinessProbe.httpGet` | Gate traffic until datasources are reachable | | `OnStart` hooks | `startupProbe` | Long warm-ups (cache fill, migrations) | | Graceful shutdown on SIGTERM | `terminationGracePeriodSeconds` | Drain in-flight requests | | `configs/.env` keys | `ConfigMap` + `envFrom` | Non-secret config | | DB passwords, API keys | `Secret` + `envFrom` | Mount via env, not files | | `/metrics` (port 2121) | named container port | OpenMetrics/Prometheus text format — scraped by any compatible collector (see below) | ## Full manifest set The following manifests deploy a GoFr service named `orders` listening on HTTP `8000` and Prometheus `2121`. Save them in a `k8s/` directory and apply with `kubectl apply -f k8s/`. ### ConfigMap (non-secret config) ```yaml apiVersion: v1 kind: ConfigMap metadata: name: orders-config namespace: default data: APP_NAME: "orders" HTTP_PORT: "8000" METRICS_PORT: "2121" LOG_LEVEL: "INFO" TRACE_EXPORTER: "otlp" TRACER_URL: "otel-collector.observability.svc.cluster.local:4317" TRACER_RATIO: "0.1" REDIS_HOST: "redis.default.svc.cluster.local" REDIS_PORT: "6379" DB_HOST: "postgres.default.svc.cluster.local" DB_PORT: "5432" DB_NAME: "orders" DB_DIALECT: "postgres" ``` These keys are read by GoFr from environment variables — the same names you use in `configs/.env` locally. ### Secret (credentials) ```yaml apiVersion: v1 kind: Secret metadata: name: orders-secret namespace: default type: Opaque stringData: DB_USER: "orders_app" DB_PASSWORD: "change-me" REDIS_PASSWORD: "change-me" ``` For real clusters, generate this with `kubectl create secret generic ... --from-literal=...` or use an external secrets operator (Vault, AWS Secrets Manager, etc.). Never commit populated Secret YAML. 
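For example, the Secret above can be created imperatively instead of from committed YAML. The names and literals here are the illustrative `orders` values from this guide; in a real pipeline the passwords would come from your secret store rather than `openssl rand`:

```shell
# Generate the Secret without committing any YAML.
# --dry-run=client -o yaml prints the manifest instead of applying it,
# which is handy for piping into a sealed-secrets or SOPS workflow;
# drop those two flags to create the Secret directly.
kubectl create secret generic orders-secret \
  --namespace default \
  --from-literal=DB_USER=orders_app \
  --from-literal=DB_PASSWORD="$(openssl rand -base64 24)" \
  --from-literal=REDIS_PASSWORD="$(openssl rand -base64 24)" \
  --dry-run=client -o yaml
```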
### Deployment ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: orders namespace: default labels: app.kubernetes.io/name: orders spec: replicas: 3 revisionHistoryLimit: 5 strategy: type: RollingUpdate rollingUpdate: maxSurge: 25% maxUnavailable: 0 selector: matchLabels: app.kubernetes.io/name: orders template: metadata: labels: app.kubernetes.io/name: orders spec: terminationGracePeriodSeconds: 45 securityContext: runAsNonRoot: true runAsUser: 65532 seccompProfile: type: RuntimeDefault containers: - name: orders image: my-org/orders:1.4.2 imagePullPolicy: IfNotPresent ports: - name: http containerPort: 8000 protocol: TCP - name: metrics containerPort: 2121 protocol: TCP envFrom: - configMapRef: name: orders-config - secretRef: name: orders-secret resources: requests: cpu: "200m" memory: "256Mi" limits: cpu: "1" memory: "512Mi" livenessProbe: httpGet: path: /.well-known/alive port: http initialDelaySeconds: 5 periodSeconds: 10 timeoutSeconds: 2 failureThreshold: 3 readinessProbe: httpGet: path: /.well-known/health port: http initialDelaySeconds: 5 periodSeconds: 5 timeoutSeconds: 2 failureThreshold: 3 startupProbe: httpGet: path: /.well-known/alive port: http failureThreshold: 30 periodSeconds: 2 securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true capabilities: drop: ["ALL"] ``` The resource requests/limits above (`200m` / `256Mi` request, `1` / `512Mi` limit) are reasonable starting points for a small CRUD service, **not** a prescription. Profile your service under realistic load and adjust — a service that fans out to many datasources will use more memory; a CPU-bound JSON-heavy API may need a higher CPU limit. 
### Service ```yaml apiVersion: v1 kind: Service metadata: name: orders namespace: default labels: app.kubernetes.io/name: orders spec: type: ClusterIP selector: app.kubernetes.io/name: orders ports: - name: http port: 80 targetPort: http protocol: TCP - name: metrics port: 2121 targetPort: metrics protocol: TCP ``` Naming the metrics port `metrics` lets any OpenMetrics scraper (Prometheus `ServiceMonitor`, Grafana Alloy, OpenTelemetry Collector, VictoriaMetrics `VMServiceScrape`, Datadog Agent, etc.) select it by name without hardcoding `2121`. ### HorizontalPodAutoscaler (optional) ```yaml apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: orders namespace: default spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: orders minReplicas: 3 maxReplicas: 20 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 70 ``` For traffic-driven scaling, switch to a custom-metrics adapter against the `app_http_response` histogram GoFr exports (request rate or p95 latency). ## Scraping metrics GoFr's `/metrics` endpoint exposes the OpenMetrics text format (Prometheus-compatible). Any collector that speaks OpenMetrics can scrape it — pick the one that matches your platform: - **Prometheus + ServiceMonitor** (kube-prometheus-stack) — the most common path. The Prometheus Operator's `ServiceMonitor` CRD selects services by label and scrapes the named `metrics` port. See {% new-tab-link newtab=false title="Production Prometheus on Kubernetes" href="/docs/guides/production-prometheus-kubernetes" /%}. - **Grafana Alloy** (the unified telemetry collector that supersedes Grafana Agent) — has a `prometheus.scrape` component that targets the same endpoint and can forward to Mimir, Cortex, or Grafana Cloud. - **OpenTelemetry Collector** — the `prometheus` receiver scrapes OpenMetrics endpoints; pair with an OTLP exporter to push metrics to any OTLP-aware backend or vendor.
- **VictoriaMetrics Operator** — uses `VMServiceScrape` (analogous to ServiceMonitor) if you've replaced Prometheus with VictoriaMetrics. - **Datadog Agent** — the OpenMetrics check (`openmetrics` integration) scrapes the same endpoint when you set the relevant pod annotations. GoFr does not ship config for any of these collectors — pick one and follow its install docs. The only contract on the GoFr side is the `/metrics` endpoint on `METRICS_PORT` (default 2121), which is named `metrics` in the Service so any of these scrapers can target it by name. ## Probes: why `/.well-known/health` for readiness, `/.well-known/alive` for liveness? Both endpoints are registered automatically by GoFr (see {% new-tab-link newtab=false title="Monitoring Service Health" href="/docs/advanced-guide/monitoring-service-health" /%}). - **`/.well-known/alive`** returns 200 as long as the HTTP server is up. A failure means "the process is wedged — restart me." That maps to *liveness*. - **`/.well-known/health`** returns 200 only when the service **and its dependencies** are reachable. A failure here means "I'm up but I can't serve traffic right now — stop sending it." That maps to *readiness*. Using `/.well-known/health` for liveness is a common mistake: a transient Redis outage will then cause kubelet to restart pods in a loop, taking the service fully offline. ## Graceful shutdown When Kubernetes terminates a pod it sends `SIGTERM`, removes the pod from the Service endpoints, and waits up to `terminationGracePeriodSeconds` before sending `SIGKILL`. GoFr's `app.Run()` listens for `SIGINT` and `SIGTERM` and stops accepting new requests while letting in-flight ones finish. Set `terminationGracePeriodSeconds` to slightly more than your longest realistic request — `45` is a safe default for typical APIs; bump it for services that stream or batch. If you have `OnStart` warm-up logic, see {% new-tab-link newtab=false title="Startup Hooks" href="/docs/advanced-guide/startup-hooks" /%}. 
## Production tips - **`maxUnavailable: 0`** during rollouts is safer than the default `25%` — combined with `maxSurge: 25%`, you get zero-downtime deploys at the cost of one extra pod's worth of resources. - **Pin image tags** to a SHA or semantic version. `:latest` will not roll the Deployment when you push a new image. - **PodDisruptionBudget** with `minAvailable: 2` (or `maxUnavailable: 1`) protects you during node drains. - **Don't put `/metrics` behind authentication** in-cluster — Prometheus must scrape it, and `NetworkPolicy` is a cleaner control. - **Tracing sampling:** in production, `TRACER_RATIO=0.1` (10%) is a sensible starting point. See {% new-tab-link newtab=false title="Production Tracing" href="/docs/guides/production-tracing" /%}. ## Verification ```bash kubectl apply -f k8s/ # Wait for rollout. kubectl rollout status deployment/orders --timeout=120s # Inspect probe state. kubectl get pods -l app.kubernetes.io/name=orders kubectl describe pod | grep -A2 -E "Liveness|Readiness|Startup" # Hit the endpoints from inside the cluster. kubectl run curl --rm -it --image=curlimages/curl --restart=Never -- \ curl -s http://orders.default.svc.cluster.local/.well-known/health # Or port-forward for local poking. kubectl port-forward svc/orders 8080:80 2121:2121 curl -s http://localhost:8080/.well-known/health curl -s http://localhost:2121/metrics | head ``` {% faq %} {% faq-item question="My pod is CrashLoopBackOff right after deploy — how do I tell if it's a probe issue?" %} `kubectl describe pod` shows the last container exit reason and recent probe failures. If liveness fired before the app finished initializing, raise `startupProbe.failureThreshold` (each unit is `periodSeconds`, so `30 * 2s = 60s` of grace). If readiness keeps failing, port-forward to the pod and `curl /.well-known/health` directly — the JSON body lists which dependency is down. {% /faq-item %} {% faq-item question="Should I run the metrics server on the same port as HTTP?" 
%} GoFr binds metrics on `METRICS_PORT` (default `2121`) separately from `HTTP_PORT` (default `8000`). Keep them split so you can apply different `NetworkPolicy` rules — for example, only allow Prometheus to reach `2121`. {% /faq-item %} {% faq-item question="How do I roll over secrets without downtime?" %} Update the Secret, then trigger a rollout with `kubectl rollout restart deployment/orders`. Pods come back with the new env values via `envFrom`. For automatic reload on Secret change, use a tool like Reloader, since `envFrom` doesn't auto-update running containers. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/distributed-tracing # Distributed Tracing {% answer %} GoFr uses OpenTelemetry with W3C TraceContext + Baggage propagators by default. Inbound HTTP requests have their `traceparent` extracted, and GoFr's outbound HTTP service client injects the same headers downstream — so a request crossing five GoFr services shows up as one trace in Jaeger when they all export to the same backend. {% /answer %} ## What GoFr propagates OpenTelemetry setup in GoFr (verified in `pkg/gofr/otel.go`) configures: ```go otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator( propagation.TraceContext{}, propagation.Baggage{})) ``` That means every GoFr binary, on boot, registers two propagators: - **W3C TraceContext** — the standard `traceparent` and `tracestate` headers. - **W3C Baggage** — the `baggage` header for cross-service key/value context. The HTTP server middleware extracts both on every inbound request (`pkg/gofr/http/middleware/tracer.go`). The HTTP service client injects both on every outbound request (`pkg/gofr/service/new.go`). ## Trace ID format W3C trace IDs are 16-byte (32 hex character) values; span IDs are 8-byte (16 hex). When a request carries a trace context, GoFr's logger writes the trace ID to the top-level `trace_id` field on the JSON log envelope (the field is `omitempty`, so it only appears when a trace context is set). 
On HTTP request logs specifically, the `message` field is itself a `RequestLog` struct that carries its own nested `trace_id` and `span_id` alongside `method`, `uri`, etc. — that nested copy is only present on the HTTP middleware's request log line, not on every log entry. Correlate logs and traces using either occurrence (see [Production Logging](/docs/guides/production-logging) for the exact log shape and shipper configuration). A trace looks the same across HTTP and gRPC: the same trace ID, with each service contributing one or more spans. ## Configuration Tracing is opt-in. Set: | Env var | Purpose | Notes | |------------------|------------------------------------------------------|------------------------------------------| | `TRACE_EXPORTER` | `otlp`, `jaeger`, `zipkin` (deprecated) | Required to enable | | `TRACER_URL` | Endpoint URL or `host:port` | Required when exporter is set | | `TRACER_RATIO` | Sample ratio (0.0–1.0) | Defaults to `1` (100%) | | `TRACER_HEADERS` | Custom headers (e.g., for SaaS auth) | Comma-separated `key=value` pairs | | `TRACER_AUTH_KEY`| Single auth header value | Use `TRACER_HEADERS` for multiple | The `zipkin` value emits a deprecation warning at startup and recommends switching to `otlp` (verified in `pkg/gofr/otel.go`). ## End-to-end example Service A receives an HTTP request, calls Service B over HTTP, which writes to a database. With GoFr defaults, the trace contains: 1. Server span on A (from the HTTP middleware). 2. Custom application spans on A if you call `c.Trace("step-name")` — see [Custom Spans in Tracing](/docs/advanced-guide/custom-spans-in-tracing). 3. Client span on A's outbound HTTP call to B. 4. Server span on B from the same `traceparent`. 5. Spans for B's database call (when using GoFr's instrumented datasources). Both A and B point `TRACE_EXPORTER=otlp` and `TRACER_URL` at the same collector. In Jaeger or Tempo, a single search by trace ID shows the whole path. 
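The `traceparent` header that links these spans follows the W3C format `version-traceid-spanid-flags`. A minimal sketch of pulling one apart — plain Go for illustration; GoFr's middleware does this through the OpenTelemetry propagator, not by hand-parsing:

```go
package main

import (
	"fmt"
	"strings"
)

// parseTraceparent splits a W3C traceparent header into its four fields:
// version (2 hex) - trace-id (32 hex) - parent-span-id (16 hex) - flags (2 hex).
func parseTraceparent(h string) (traceID, spanID string, sampled bool, err error) {
	parts := strings.Split(h, "-")
	if len(parts) != 4 || len(parts[1]) != 32 || len(parts[2]) != 16 {
		return "", "", false, fmt.Errorf("malformed traceparent: %q", h)
	}
	// Simplified: a real parser hex-decodes the flags and masks the sampled bit.
	return parts[1], parts[2], parts[3] == "01", nil
}

func main() {
	tid, sid, sampled, err := parseTraceparent(
		"00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
	if err != nil {
		panic(err)
	}
	fmt.Println(tid, sid, sampled)
}
```

The 32-hex-character trace ID is the value GoFr writes into the `trace_id` log field, which is what makes log–trace correlation a plain string search.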
## gRPC The gRPC server in GoFr also participates in tracing. Cross-protocol traces — HTTP into gRPC and back — work because both servers use the same OpenTelemetry SDK and propagators. ## Pub/Sub Trace propagation through a message bus is partial. The Google Pub/Sub datasource injects trace context into message attributes (verified in `pkg/gofr/datasource/pubsub/google/tracing.go`), which means a producer span and consumer span share the same trace ID. Other Pub/Sub backends (Kafka, NATS, SQS, MQTT, EventHub) may or may not propagate trace context end-to-end; if your span graph breaks at the bus, log the trace ID into the message payload manually as a fallback so downstream logs can still be correlated. ## Sampling: keep it consistent If A samples at 10% and B samples at 100%, you'll have lots of B-only traces with no parent — useless. Two rules: - Use head-based sampling (`TRACER_RATIO`) consistently across all services in the request path. - Or use tail-based sampling at the collector level, where the decision happens after the trace is assembled. For a typical OTLP collector setup, configure tail sampling once in the collector and set `TRACER_RATIO=1` in every service so all spans are exported and the collector decides what to keep. ## Visualizing in Jaeger Run Jaeger in OTLP-receiver mode, point GoFr at the gRPC OTLP endpoint (typically `host:4317`), and traces appear in the Jaeger UI within a second or two. ```dotenv TRACE_EXPORTER=otlp TRACER_URL=jaeger.observability.svc.cluster.local:4317 TRACER_RATIO=1 ``` For Tempo or Honeycomb, point at their OTLP gRPC endpoints and add `TRACER_HEADERS` for any required auth. ## Custom application spans For business-level operations inside a handler, wrap them with `c.Trace("name")`. This is the recommended way to add span granularity without touching the OTel SDK directly. See [Custom Spans in Tracing](/docs/advanced-guide/custom-spans-in-tracing). 
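The head-sampling consistency rule above works because ratio sampling is a deterministic function of the trace ID: two services configured with the same ratio always make the same keep/drop decision for a given trace. A simplified sketch of that logic, modeled on the OpenTelemetry Go SDK's `TraceIDRatioBased` sampler (not a GoFr API):

```go
package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
)

// shouldSample mimics deterministic trace-ID ratio sampling: the decision
// depends only on the trace ID and the ratio, never on which service runs it.
func shouldSample(traceIDHex string, ratio float64) bool {
	id, err := hex.DecodeString(traceIDHex)
	if err != nil || len(id) != 16 {
		return false
	}
	// Compare the lower 8 bytes (shifted to 63 bits) against ratio × 2^63 —
	// the scheme the OTel Go SDK uses for TraceIDRatioBased.
	x := binary.BigEndian.Uint64(id[8:16]) >> 1
	return x < uint64(ratio*(1<<63))
}

func main() {
	id := "4bf92f3577b34da6a3ce929d0e0e4736"
	// Two services with the same ratio agree about the same trace.
	fmt.Println(shouldSample(id, 0.1) == shouldSample(id, 0.1)) // true
	fmt.Println(shouldSample(id, 1.0))                          // ratio 1 keeps everything
}
```

This is also why mismatched ratios break traces: a 10% service and a 100% service use different thresholds, so the 100% service exports spans whose parents were never sampled.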
## Common gotchas - **Mismatched propagators** — if a non-GoFr service in the chain uses B3 instead of W3C, traces split. Standardize on W3C TraceContext across the fleet. - **Sidecar tracing** — Istio and Linkerd inject their own spans. Configure them to use the same backend, not a parallel one. - **Logs without trace IDs** — if `trace_id` is empty in a log, the request didn't carry a `traceparent`. Likely the entry point (Ingress, gateway) is not adding one. - **High-cardinality span names** — never put a path parameter (e.g., `/orders/12345`) directly in a span name. Use the route template. ## What spans cost Each exported span is a few hundred bytes over the network plus storage. At 100% sampling and high RPS, span volume can dominate egress. Sample the way you sample logs: aggressively for routine traffic, fully for errors and slow requests (tail-based sampling). {% faq %} {% faq-item question="Which trace propagation format does GoFr use?" %} W3C TraceContext (`traceparent`, `tracestate`) and W3C Baggage. They are registered as the global OpenTelemetry propagators on app startup. {% /faq-item %} {% faq-item question="Are HTTP and gRPC traces stitched together automatically?" %} Yes. Both protocols run through the same OpenTelemetry SDK and propagators in GoFr, so a trace that hops HTTP → gRPC → HTTP shows up as one trace in the backend. {% /faq-item %} {% faq-item question="How do I correlate logs with a trace?" %} When a request has a trace context, GoFr writes the trace ID into the JSON log envelope's top-level `trace_id` field and into the nested `message` object on HTTP request logs. Configure your shipper to extract it from both locations, then search your logging backend for that value to get every log entry for the request.
{% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/dockerizing-gofr-services # Dockerizing GoFr Services {% answer %} Two production-ready ways to ship GoFr in a container: a multi-stage build that compiles a static, CGO-disabled binary inside the image, or a copy-binary variant that lifts a CI-built binary into a minimal runtime. Both target `gcr.io/distroless/static-debian12:nonroot`, expose `HTTP_PORT` (8000) and `METRICS_PORT` (2121), read configuration from env vars, and rely on Kubernetes liveness/readiness probes (the `/.well-known/alive` and `/.well-known/health` endpoints GoFr registers) — Dockerfile `HEALTHCHECK` does not work cleanly on distroless. {% /answer %} {% howto name="Containerize a GoFr service (multi-stage)" description="Build a small, secure container image for a GoFr binary using a multi-stage Go build." steps=[{"name": "Add a multi-stage Dockerfile", "text": "Use a golang:1.25-alpine builder stage to compile a static binary, then copy it into a gcr.io/distroless/static-debian12:nonroot runtime stage."}, {"name": "Cache module downloads", "text": "COPY go.mod and go.sum first and run go mod download before copying source — combined with a BuildKit cache mount, Docker reuses module downloads across builds."}, {"name": "Compile a static binary", "text": "Set CGO_ENABLED=0 with -trimpath and use -ldflags to embed version/commit, so the runtime image needs no libc and stays minimal."}, {"name": "Run as non-root", "text": "distroless/static-debian12:nonroot already provides UID 65532; set USER nonroot:nonroot to use it."}, {"name": "Probe over HTTP from outside the container", "text": "distroless/static has no shell or wget/curl, so Dockerfile HEALTHCHECK is impractical. 
On Kubernetes, use livenessProbe/readinessProbe with httpGet on /.well-known/alive and /.well-known/health."}, {"name": "Build and tag", "text": "docker build with a short-SHA tag plus a semver alias so you can roll back by digest."}] /%} ## When to use this guide Use this guide when you have a GoFr service running locally with `go run` and need to package it for a registry, CI, or Kubernetes. The output is a small (typically under 20 MB), non-root image that does not ship a shell or package manager — keeping the attack surface small for production. For Kubernetes manifests that consume this image, see {% new-tab-link newtab=false title="Deploying to Kubernetes" href="/docs/guides/deploying-to-kubernetes" /%}. ## Project layout A typical containerized GoFr project looks like this: ```text my-service/ ├── main.go ├── go.mod ├── go.sum ├── configs/ │ └── .env ├── Dockerfile ├── .dockerignore └── docker-compose.yml ``` GoFr loads `configs/.env` automatically when present, but in containers you should prefer real environment variables — that is what Kubernetes ConfigMaps and Secrets inject. ## Choose your variant Two production-ready paths. Pick based on where you want compilation to happen. | Variant | When to prefer | | --- | --- | | Multi-stage build | You want a single `docker build` to produce a release-grade image. Build context lives entirely in-repo. | | Copy pre-built binary | Your CI already produces a reproducible binary (e.g., signed/attested by SLSA, GoReleaser, etc.). The image build is a thin wrapper around that artifact, so it's faster and the build context is tiny. | ## Variant A: Multi-stage Dockerfile Save this as `Dockerfile` at the repo root: ```dockerfile # syntax=docker/dockerfile:1.7 ARG GO_VERSION=1.25 ARG APP_VERSION=dev ARG GIT_COMMIT=unknown # ---------- builder ---------- FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS builder RUN apk add --no-cache git ca-certificates WORKDIR /src # Cache module downloads in their own layer. 
COPY go.mod go.sum ./ RUN --mount=type=cache,target=/go/pkg/mod \ go mod download # Copy source after deps so source edits don't bust the dep cache. COPY . . ARG APP_VERSION ARG GIT_COMMIT ARG TARGETOS ARG TARGETARCH # CGO=0 + -trimpath gives a static, reproducible binary. # TARGETOS/TARGETARCH come from BuildKit so the same Dockerfile builds for # linux/amd64 and linux/arm64 unchanged. RUN --mount=type=cache,target=/go/pkg/mod \ CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH:-amd64} \ go build \ -trimpath \ -ldflags="-s -w -X main.version=${APP_VERSION} -X main.commit=${GIT_COMMIT}" \ -o /out/app ./ # ---------- runtime ---------- FROM gcr.io/distroless/static-debian12:nonroot WORKDIR /app COPY --from=builder /out/app /app/app USER nonroot:nonroot EXPOSE 8000 2121 # distroless/static has no shell and no wget/curl, so a Dockerfile HEALTHCHECK # is impractical here. On Kubernetes, use the Deployment's livenessProbe and # readinessProbe (httpGet on /.well-known/alive and /.well-known/health) — see # the Deploying to Kubernetes guide. ENTRYPOINT ["/app/app"] ``` A few things worth calling out: - **`CGO_ENABLED=0`** produces a fully statically-linked binary with no dependency on `libc` or a dynamic linker at runtime — required because `distroless/static-debian12:nonroot` ships only the binary, CA certs, `/etc/passwd`, tzdata, and a non-root user. There is no `libc` (glibc, musl, anything), no shell, no package manager. - **`TARGETOS` / `TARGETARCH` ARGs** let one Dockerfile build for `linux/amd64` and `linux/arm64` via `docker buildx build --platform=linux/amd64,linux/arm64 …` — useful when developing on Apple Silicon and deploying to amd64 nodes (or vice versa). - **`-X main.version=…`** ldflags only inject values if your `main` package declares matching variables. Add `var (version, commit string)` near the top of `main.go` if you want `gofr.Logger().Info(version, commit)` to surface the build's git SHA. 
- **`USER nonroot`** runs as UID 65532; combined with a read-only root filesystem in Kubernetes this satisfies most pod-security baselines. - **No bundled `configs/`**: env vars come from the platform (compose, K8s ConfigMap/Secret, cloud SSM/Secrets Manager). Do not `COPY configs/` into the runtime image — it tends to drift, and a populated `.env` is a secret. Bake only platform-independent defaults into your binary. - **Healthchecks** rely on `/.well-known/alive` (process up) and `/.well-known/health` (datasources reachable) that GoFr registers automatically. There is no `healthcheck` subcommand on the GoFr binary, and `distroless/static` has no shell or `wget`/`curl` to call the endpoint, so a Dockerfile `HEALTHCHECK` directive does not work cleanly on this base. On Kubernetes, use the Deployment's `livenessProbe` / `readinessProbe` instead (see the Deploying to Kubernetes guide). ## Variant B: Copy a pre-built binary If your CI already produces a release-grade Go binary — reproducible flags, SLSA provenance, signed by cosign, whatever your supply chain looks like — you don't need a Go toolchain inside the image. Lift the binary in. Build the binary in CI: ```bash CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \ go build -trimpath -ldflags='-s -w' -o ./bin/app ./ ``` Then this is the entire Dockerfile: ```dockerfile # syntax=docker/dockerfile:1.7 FROM gcr.io/distroless/static-debian12:nonroot WORKDIR /app # `./bin/app` is the binary your CI produced one step earlier. COPY ./bin/app /app/app USER nonroot:nonroot EXPOSE 8000 2121 ENTRYPOINT ["/app/app"] ``` Why this is sometimes preferable: - **Faster image builds**: no Go toolchain, no module download, no compile step. The image build is a single `COPY`. - **Smaller build context**: `docker build` only needs `./bin/app` and the Dockerfile. Use a tight `.dockerignore` (or build with a custom context) so source isn't shipped to the daemon. 
- **Decoupled supply chain**: the binary and its provenance are signed once in CI and the image build never touches source. This matches SLSA Level 3+ patterns. When NOT to use this variant: - You want a single `docker build` to be the only entry-point for a fresh checkout. Variant A is more self-contained. - You're shipping arch-specific binaries from the same Dockerfile. Variant A's `TARGETARCH` flow is cleaner. ## .dockerignore Without this, `COPY . .` pulls in `.git`, local secrets, and build artifacts: ```text .git .gitignore .dockerignore Dockerfile docker-compose.yml *.md **/*_test.go bin/ dist/ configs/.env.local .env .env.* ``` ## Building and tagging ```bash docker build \ --build-arg APP_VERSION=$(git describe --tags --always) \ --build-arg GIT_COMMIT=$(git rev-parse --short HEAD) \ -t my-org/my-service:$(git rev-parse --short HEAD) \ -t my-org/my-service:latest \ . ``` Always tag with a commit SHA in addition to (or instead of) `latest`. Kubernetes `RollingUpdate` only rolls when the image reference actually changes, and `latest` is mutable. ## docker-compose for local development For local dev you usually want the service plus a few datasources. This compose file matches GoFr's default ports (HTTP `8000`, metrics `2121`): ```yaml services: app: build: . 
ports: - "8000:8000" - "2121:2121" environment: APP_NAME: my-service HTTP_PORT: "8000" METRICS_PORT: "2121" LOG_LEVEL: DEBUG REDIS_HOST: redis REDIS_PORT: "6379" DB_HOST: postgres DB_PORT: "5432" DB_USER: gofr DB_PASSWORD: gofr DB_NAME: gofr DB_DIALECT: postgres PUBSUB_BACKEND: KAFKA PUBSUB_BROKER: kafka:9092 depends_on: - redis - postgres - kafka redis: image: redis:7-alpine ports: ["6379:6379"] postgres: image: postgres:16-alpine environment: POSTGRES_USER: gofr POSTGRES_PASSWORD: gofr POSTGRES_DB: gofr ports: ["5432:5432"] kafka: image: bitnami/kafka:3.7 environment: KAFKA_CFG_NODE_ID: "0" KAFKA_CFG_PROCESS_ROLES: controller,broker KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "0@kafka:9093" KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093 KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER ports: ["9092:9092"] ``` The exact env var names for each datasource (Mongo, Cassandra, etc.) are documented under {% new-tab-link newtab=false title="Injecting Databases Drivers" href="/docs/advanced-guide/injecting-databases-drivers" /%}. ## Production tips - **Image size:** with `distroless/static`, a typical GoFr binary lands at 15–25 MB compressed. If you see hundreds of MB, you forgot `CGO_ENABLED=0` or copied build artifacts. - **Read-only root FS:** in Kubernetes, set `readOnlyRootFilesystem: true` and mount an `emptyDir` if the service writes temp files. - **Don't bake secrets:** never `COPY` a populated `.env` into the runtime image. Inject via Kubernetes Secrets instead. - **Pin the Go version:** the `ARG GO_VERSION` lets CI build the same image deterministically. - **Build cache:** Variant A's Dockerfile already includes the `--mount=type=cache,target=/go/pkg/mod` cache mount on both `go mod download` and `go build`; just use BuildKit (default in `docker buildx`, or set `DOCKER_BUILDKIT=1`) to keep the module cache warm between CI runs. 
## Verification A hello-world GoFr service (no datasources) needs no env injection: ```bash docker build -t my-service:dev . docker run --rm -p 8000:8000 -p 2121:2121 my-service:dev # In another shell: curl -s http://localhost:8000/.well-known/alive # {"data":{"status":"UP"}} curl -s http://localhost:2121/metrics | head # # HELP app_http_response ... # # TYPE app_http_response histogram ``` A real service with datasources needs env vars. Use `--env-file`: ```bash cat > .env.dev <<'EOF' APP_NAME=my-service HTTP_PORT=8000 METRICS_PORT=2121 LOG_LEVEL=DEBUG REDIS_HOST=host.docker.internal REDIS_PORT=6379 DB_HOST=host.docker.internal DB_PORT=5432 DB_USER=gofr DB_PASSWORD=gofr DB_NAME=gofr DB_DIALECT=postgres EOF docker run --rm -p 8000:8000 -p 2121:2121 --env-file .env.dev my-service:dev # Same curl checks as above. # Inspect image size and layers: docker image inspect my-service:dev --format '{{.Size}}' docker history my-service:dev ``` {% faq %} {% faq-item question="Why distroless instead of alpine?" %} Alpine includes BusyBox, apk, and a shell — useful for debugging but extra attack surface. Distroless ships only what your binary needs, so CVEs in shells and package managers cannot affect you. If you need to debug, run `docker run --rm -it --entrypoint sh` against the *builder* stage instead of the runtime image. {% /faq-item %} {% faq-item question="Can I use scratch instead of distroless?" %} Yes — `FROM scratch` is even smaller, but you must `COPY` `/etc/ssl/certs/ca-certificates.crt` yourself for HTTPS to work. Distroless includes that plus `nonroot` user mappings, which is why it is the default recommendation here. {% /faq-item %} {% faq-item question="How do I run database migrations on container start?" %} Use a Kubernetes Job, an init container that runs the same image with a different argument, or wire migrations into a GoFr `OnStart` hook (see {% new-tab-link newtab=false title="Startup Hooks" href="/docs/advanced-guide/startup-hooks" /%}). 
Running migrations on every replica at startup is racy under HPA. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/graceful-shutdown # Graceful Shutdown {% answer %} GoFr listens for `SIGINT` and `SIGTERM` and, on either signal, runs `App.Shutdown` which calls `Shutdown` on the HTTP, gRPC, and metrics servers and `Close` on the container's datasource connections. The shutdown is bounded by `SHUTDOWN_GRACE_PERIOD` (default `30s`); if it expires the process exits with whatever connections remain. Pair this with Kubernetes' `terminationGracePeriodSeconds` and a small `preStop` sleep to avoid losing in-flight requests during rolling restarts. {% /answer %} ## When to use Every production GoFr deployment on Kubernetes should be configured for graceful shutdown. Without it, rolling updates and node drains return 502/504s for any request that is mid-flight when a pod is terminated, and Pub/Sub consumers can lose uncommitted messages. ## How GoFr handles signals `App.Run` sets up a signal-aware context: ```go ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGINT, syscall.SIGTERM) ``` When that context is canceled, a goroutine creates a timeout context using `SHUTDOWN_GRACE_PERIOD` (default `30s`) and calls `App.Shutdown`. The order is fixed by the framework — see [`pkg/gofr/gofr.go:96-114`](https://github.com/gofr-dev/gofr/blob/main/pkg/gofr/gofr.go) — and `Shutdown` joins errors from each step: 1. `httpServer.Shutdown(ctx)` — stops accepting new connections, waits for in-flight handlers 2. `grpcServer.Shutdown(ctx)` — drains active streams 3. `container.Close()` — closes SQL pools, Redis clients, Pub/Sub consumers, and other registered datasources 4. `metricServer.Shutdown(ctx)` — stops `/metrics` 5. Logger close — if the logger implements `io.Closer`, its `Close()` is called last The container's `Close` is what commits Pub/Sub offsets and lets SQL drivers finish in-progress queries. 
Application code does not need to coordinate this order. ## OnStart hooks vs shutdown hooks GoFr exposes [OnStart hooks](/docs/advanced-guide/startup-hooks) for synchronous startup work (cache warmup, seeding). There is no public `OnShutdown` hook today; `App.Shutdown` is what gets called and it operates on the framework's own resources. If you need cleanup on exit for resources you own (custom goroutines, file handles, third-party clients), use context-cancellation: pass a `context.Context` derived from `signal.NotifyContext(...)` into your goroutines and have each goroutine `defer` its own cleanup when that context is cancelled. The framework's `App.Shutdown` runs concurrently with this, so total wind-down stays within `SHUTDOWN_GRACE_PERIOD`. ## The Kubernetes termination flow When kubelet decides to evict a pod, it executes this sequence: 1. Pod's status flips to `Terminating`; endpoints controllers begin removing the pod from Service `Endpoints`. 2. `preStop` hook runs (if configured). 3. `SIGTERM` is sent to PID 1. 4. After `terminationGracePeriodSeconds` (default 30s), `SIGKILL` is sent. Steps 1 and 3 race: kube-proxy on every node needs time to update iptables/IPVS rules. A pod can still receive new traffic for a second or two after `SIGTERM`. The fix is a `preStop` sleep that delays shutdown long enough for endpoint removal to propagate. ```yaml spec: terminationGracePeriodSeconds: 60 containers: - name: api image: ghcr.io/example/orders-api:1.4.2 lifecycle: preStop: exec: command: ["/bin/sh", "-c", "sleep 5"] env: - name: SHUTDOWN_GRACE_PERIOD value: "45s" readinessProbe: httpGet: path: /.well-known/health port: 8000 livenessProbe: httpGet: path: /.well-known/alive port: 8000 ``` ### Sizing the grace period Set the values so `preStop` + `SHUTDOWN_GRACE_PERIOD` is comfortably less than `terminationGracePeriodSeconds`. 
A useful starting point: - `preStop`: 5s (covers endpoint propagation on most clusters) - `SHUTDOWN_GRACE_PERIOD`: P99 request latency × 2, plus headroom for Pub/Sub commits - `terminationGracePeriodSeconds`: `preStop` + `SHUTDOWN_GRACE_PERIOD` + 10s buffer For a service with 2s P99, that's 5s + 30s + 10s = 45–60s. ## Per-datasource behavior - **SQL.** `database/sql` waits for active queries to finish on `Close()`. Long-running transactions can extend shutdown — keep request timeouts shorter than `SHUTDOWN_GRACE_PERIOD`. - **Redis / NoSQL.** Clients close idle connections immediately and wait for in-flight commands. - **Pub/Sub.** GoFr's subscription manager respects the shutdown context — consumers stop polling and commit current offsets where the broker supports it (Kafka, NATS JetStream). - **Cron jobs.** GoFr's `App.Shutdown` drains HTTP, gRPC, and metrics servers and closes datasource connections — it does **not** stop the cron scheduler or wait for in-flight cron tasks. Cron jobs run with `context.Background()`, so they continue past SIGTERM and may be cut off when the container is killed at `terminationGracePeriodSeconds`. If you have long-running cron work that must finish, run it as a separate Kubernetes `Job` triggered by a `CronJob` resource instead of inside the same pod, so the pod's lifecycle doesn't interrupt it. ## Verification Trigger a rolling restart and watch the logs: ```bash kubectl rollout restart deployment/orders-api -n prod kubectl logs -f -l app=orders-api -n prod --previous ``` You should see `Shutting down server with a timeout of 30s` followed by `Application shutdown complete` on each terminating pod, with no `connection reset` errors on the client side. From a load-test client running during the restart, error rate should stay below 0.1%. {% faq %} {% faq-item question="What is the default SHUTDOWN_GRACE_PERIOD in GoFr?" %} 30 seconds. 
It is configurable via the `SHUTDOWN_GRACE_PERIOD` env var and accepts any Go duration string (e.g., `45s`, `1m30s`). {% /faq-item %} {% faq-item question="Do I need a preStop hook if GoFr already handles SIGTERM?" %} Yes, on Kubernetes. The preStop sleep covers the brief window before kube-proxy updates iptables on every node — without it, pods can receive new connections after SIGTERM has already started the drain. {% /faq-item %} {% faq-item question="What happens if shutdown takes longer than SHUTDOWN_GRACE_PERIOD?" %} The shutdown context expires, `App.Shutdown` returns the deadline error, and Kubernetes will eventually `SIGKILL` the process when `terminationGracePeriodSeconds` elapses. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/helm-chart-starter # Helm Chart Starter {% answer %} This is a copy-paste reference Helm chart for a GoFr microservice. It assumes the application listens on the framework defaults — HTTP 8000, gRPC 9000, metrics 2121 — and uses `/.well-known/alive` and `/.well-known/health` for probes. Use it as the starting point for your own chart. {% /answer %} {% howto name="Package a GoFr service as a Helm chart" description="Build a minimal Helm chart for a GoFr microservice with templated Deployment, Service, ConfigMap, and probe wiring." 
steps=[{"name": "Create the chart skeleton", "text": "Generate Chart.yaml with appVersion plus apiVersion v2 and a values.yaml capturing image, replicas, env, resources, and probes."}, {"name": "Template the Deployment", "text": "In templates/deployment.yaml render replicas, image, envFrom (ConfigMap + Secret), readinessProbe at /.well-known/health and livenessProbe at /.well-known/alive."}, {"name": "Template the Service", "text": "In templates/service.yaml expose port 8000 (HTTP) and 2121 (metrics) as named ports for Prometheus scraping."}, {"name": "Wire ConfigMap and Secret", "text": "Mount values.env via ConfigMap for non-secrets and a separate Secret for credentials; both via envFrom."}, {"name": "Lint and template", "text": "Run helm lint and helm template to verify YAML output before installing."}, {"name": "Install and upgrade", "text": "helm install for first deploy, then helm upgrade --install on subsequent rollouts; tag image by digest for repeatability."}] /%} {% callout type="note" title="Prefer a maintained chart?" %} The reference chart below is intentionally minimal so you can read every line. If you'd rather depend on a maintained chart, the community chart at [zop/service](https://github.com/zopdev/helm-charts/tree/main/charts/service) covers the same shape (Deployment + Service + optional Ingress/HPA + probes). ```bash helm repo add zop https://helm.zop.dev helm install my-app zop/service ``` Override values with `-f values.yaml` or `--set`; the chart's `values.schema.json` marks user-mutable fields with `"mutable": true`. {% /callout %} This is reference material, not a published chart. A future `gofr-dev/gofr-k8s-starter` repo could host a maintained version. For now, copy the files below into a `chart/` directory in your service repo. 
## Layout ```text chart/ ├── Chart.yaml ├── values.yaml └── templates/ ├── _helpers.tpl ├── deployment.yaml └── service.yaml ``` ## Chart.yaml ```yaml apiVersion: v2 name: gofr-service description: A reference Helm chart for a GoFr microservice type: application version: 0.1.0 appVersion: "0.1.0" ``` ## values.yaml ```yaml image: repo: ghcr.io/example/my-gofr-service tag: latest pullPolicy: IfNotPresent replicaCount: 2 service: type: ClusterIP httpPort: 8000 grpcPort: 9000 metricsPort: 2121 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 512Mi env: {} # DB_HOST: db.svc # LOG_LEVEL: INFO # TRACE_EXPORTER: otlp # TRACER_URL: tempo:4317 envFromSecrets: [] # - my-db-credentials ingress: enabled: false className: nginx host: api.example.com tls: enabled: false secretName: api-tls autoscaling: enabled: false minReplicas: 2 maxReplicas: 10 targetCPUUtilizationPercentage: 70 podSecurityContext: runAsNonRoot: true runAsUser: 65532 fsGroup: 65532 securityContext: readOnlyRootFilesystem: true allowPrivilegeEscalation: false capabilities: drop: [ALL] ``` The default ports (8000, 9000, 2121) match GoFr's defaults verified in `pkg/gofr/default.go`. ## templates/_helpers.tpl ```yaml {{- define "gofr-service.name" -}} {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} {{- end -}} {{- define "gofr-service.fullname" -}} {{- printf "%s-%s" .Release.Name (include "gofr-service.name" .) | trunc 63 | trimSuffix "-" -}} {{- end -}} {{- define "gofr-service.labels" -}} app.kubernetes.io/name: {{ include "gofr-service.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} app.kubernetes.io/managed-by: {{ .Release.Service }} helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version }} {{- end -}} ``` ## templates/deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: {{ include "gofr-service.fullname" . }} labels: {{ include "gofr-service.labels" . 
| nindent 4 }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app.kubernetes.io/name: {{ include "gofr-service.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} template: metadata: labels: {{ include "gofr-service.labels" . | nindent 8 }} annotations: prometheus.io/scrape: "true" prometheus.io/port: "{{ .Values.service.metricsPort }}" prometheus.io/path: "/metrics" spec: securityContext: {{ toYaml .Values.podSecurityContext | nindent 8 }} containers: - name: app image: "{{ .Values.image.repo }}:{{ .Values.image.tag }}" imagePullPolicy: {{ .Values.image.pullPolicy }} ports: - name: http containerPort: {{ .Values.service.httpPort }} - name: grpc containerPort: {{ .Values.service.grpcPort }} - name: metrics containerPort: {{ .Values.service.metricsPort }} env: - name: HTTP_PORT value: "{{ .Values.service.httpPort }}" - name: GRPC_PORT value: "{{ .Values.service.grpcPort }}" - name: METRICS_PORT value: "{{ .Values.service.metricsPort }}" {{- range $k, $v := .Values.env }} - name: {{ $k }} value: {{ $v | quote }} {{- end }} {{- with .Values.envFromSecrets }} envFrom: {{- range . }} - secretRef: name: {{ . }} {{- end }} {{- end }} livenessProbe: httpGet: path: /.well-known/alive port: http initialDelaySeconds: 5 periodSeconds: 10 readinessProbe: httpGet: path: /.well-known/health port: http initialDelaySeconds: 5 periodSeconds: 10 resources: {{ toYaml .Values.resources | nindent 12 }} securityContext: {{ toYaml .Values.securityContext | nindent 12 }} terminationGracePeriodSeconds: 30 ``` A few choices worth calling out: - The probe paths are GoFr's built-in endpoints. `/.well-known/alive` is cheap and exempt from auth by default; `/.well-known/health` includes dependency status and is more truthful for readiness. - Env vars `HTTP_PORT`, `GRPC_PORT`, `METRICS_PORT` are set explicitly so the container ports and probe ports always agree with what GoFr actually binds. - The Prometheus scrape annotations point to `metricsPort`. 
If your platform uses ServiceMonitor/PodMonitor instead, drop the annotations and add a separate template. - `terminationGracePeriodSeconds: 30` gives GoFr's graceful shutdown time to drain in-flight requests. ## templates/service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: {{ include "gofr-service.fullname" . }} labels: {{ include "gofr-service.labels" . | nindent 4 }} spec: type: {{ .Values.service.type }} ports: - name: http port: {{ .Values.service.httpPort }} targetPort: http - name: grpc port: {{ .Values.service.grpcPort }} targetPort: grpc - name: metrics port: {{ .Values.service.metricsPort }} targetPort: metrics selector: app.kubernetes.io/name: {{ include "gofr-service.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} ``` ## Optional: Ingress and HPA Add `templates/ingress.yaml` gated on `.Values.ingress.enabled` and `templates/hpa.yaml` gated on `.Values.autoscaling.enabled`. Keep them off by default so the chart stays simple for first-time users. ## Using the chart ```bash helm upgrade --install my-api ./chart \ --set image.tag=$(git rev-parse --short HEAD) \ --set 'env.LOG_LEVEL=INFO' \ --wait --timeout 5m ``` Pin the image tag to a Git SHA in production, never `latest`. ## Probes choice If `/.well-known/health` is slow because it pings databases, you can split: - **Liveness** → `/.well-known/alive` (process is up) - **Startup probe** → `/.well-known/health` (deps reachable; tolerate failure during boot) - **Readiness** → `/.well-known/alive` once startup passes, to avoid removing pods on transient DB blips Tune per service. {% faq %} {% faq-item question="Are these the official GoFr Helm templates?" %} No. This is reference material to copy into your service repo. A future `gofr-dev/gofr-k8s-starter` repo could host a maintained chart. {% /faq-item %} {% faq-item question="Why probe `/.well-known/health` instead of `/.well-known/alive` for readiness?" 
%} `/health` includes dependency status, so a pod with a broken DB connection will be removed from service endpoints. `/alive` only confirms the process is running, which is what you want for liveness. {% /faq-item %} {% faq-item question="Do I need separate Service ports for HTTP, gRPC, and metrics?" %} Yes if you want all three reachable. They listen on different ports (8000/9000/2121 by default) and need separate Service entries. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/horizontal-pod-autoscaler # Horizontal Pod Autoscaler for GoFr {% answer %} GoFr exposes Prometheus metrics on `METRICS_PORT` (default 2121), which Kubernetes HPA v2 can read through prometheus-adapter. You can scale on CPU plus custom application signals, such as requests-per-second derived from GoFr's default HTTP histogram, by writing a discovery rule in the adapter and a `HorizontalPodAutoscaler` manifest that references it. {% /answer %} {% howto name="Autoscale a GoFr service with HPA" description="Configure Kubernetes Horizontal Pod Autoscaler against GoFr metrics — CPU first, custom request rate via prometheus-adapter when needed." 
steps=[{"name": "Enable metrics-server", "text": "kubectl apply the metrics-server manifest (or enable the minikube addon) so HPA can read pod CPU and memory."}, {"name": "Set resource requests", "text": "Set resources.requests.cpu on the GoFr Deployment — HPA computes utilization as a percentage of this."}, {"name": "Apply a CPU-based HPA", "text": "Apply autoscaling/v2 HPA with target averageUtilization on cpu (60% is a sane start); set min and max replicas based on baseline traffic."}, {"name": "Tune scale behavior", "text": "Set behavior.scaleUp.stabilizationWindowSeconds to absorb spikes and behavior.scaleDown to avoid flapping."}, {"name": "Optional: custom metrics", "text": "Install prometheus-adapter and define a rule that exposes app_http_response_count as a custom metric for HPA targeting."}, {"name": "Verify under load", "text": "Generate load with hey or k6; watch kubectl get hpa to confirm replicas grow and shrink as expected."}] /%} ## When to use Reach for HPA when traffic is bursty and a fixed replica count either over-provisions during quiet periods or under-serves during spikes. CPU autoscaling alone tends to lag behind I/O-bound workloads — a GoFr service waiting on a downstream HTTP call has low CPU but a long queue. Custom-metric HPA on QPS or latency closes that gap. For event-driven workloads (Kafka, NATS, MQTT) HPA cannot scale to zero; use [KEDA](https://keda.sh) for that. ## GoFr metrics that drive HPA GoFr publishes a {% new-tab-link newtab=true title="default set of HTTP, datasource, and runtime metrics" href="/docs/quick-start/observability" /%} on `METRICS_PORT` at `/metrics`. The HTTP server records `app_http_response` (a histogram), so requests-per-second can be derived as `rate(app_http_response_count[1m])`. You can also publish your own counters and histograms — see [Publishing Custom Metrics](/docs/advanced-guide/publishing-custom-metrics). 
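Before wiring the adapter, it is worth confirming the derived series actually produces data. Paste the equivalent of the adapter's `metricsQuery` into the Prometheus UI (the `namespace` label value here is illustrative):

```promql
sum(rate(app_http_response_count{namespace="prod"}[1m])) by (pod)
```

If this returns no series, fix scraping first; no amount of HPA configuration can recover a metric Prometheus never stored.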
Make sure your Pod template advertises the metrics port and a Prometheus scrape annotation (or a `ServiceMonitor` if you run prometheus-operator): ```yaml ports: - name: http containerPort: 8000 - name: metrics containerPort: 2121 ``` ## prometheus-adapter rule prometheus-adapter exposes Prometheus series as `custom.metrics.k8s.io` so HPA can query them. A minimal rule that surfaces per-pod RPS for a GoFr Deployment looks like: ```yaml rules: - seriesQuery: 'app_http_response_count{namespace!="",pod!=""}' resources: overrides: namespace: { resource: namespace } pod: { resource: pod } name: matches: "^app_http_response_count$" as: "http_requests_per_second" metricsQuery: | sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>) ``` Verify with `kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/*/http_requests_per_second"`. ## HPA v2 manifest ```yaml apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: orders-api namespace: prod spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: orders-api minReplicas: 3 maxReplicas: 30 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 70 - type: Pods pods: metric: name: http_requests_per_second target: type: AverageValue averageValue: "50" behavior: scaleUp: stabilizationWindowSeconds: 30 policies: - type: Percent value: 100 periodSeconds: 30 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 25 periodSeconds: 60 ``` The `behavior` block is the difference between an HPA that flaps and one that holds. Short `scaleUp.stabilizationWindowSeconds` reacts to bursts; long `scaleDown.stabilizationWindowSeconds` prevents thrashing when traffic drops momentarily. ## Gotchas - **Cold starts.** A new GoFr pod must finish [OnStart hooks](/docs/advanced-guide/startup-hooks) (cache warmup, migrations) before serving. 
Set `minReadySeconds` on the Deployment and a `readinessProbe` against `/.well-known/health` so HPA doesn't count not-ready pods toward capacity. - **Resource requests are mandatory.** HPA's CPU calculation is `usage / request`. If the Deployment omits `resources.requests.cpu`, CPU-based scaling is silently disabled. - **HPA cannot scale to zero.** `minReplicas: 0` is rejected by the API server. If you need scale-to-zero for cron-like workloads, use KEDA. - **Adapter discovery interval.** prometheus-adapter polls Prometheus every 30s by default. New metric series take up to a minute to appear in `custom.metrics.k8s.io`. ## Verification ```bash kubectl get hpa orders-api -n prod kubectl describe hpa orders-api -n prod kubectl top pods -n prod -l app=orders-api ``` `describe` prints the `Metrics` block with current vs target values; mismatched units (e.g., `m` vs whole numbers) are the most common reason HPA reports `unknown`. {% faq %} {% faq-item question="Does GoFr need any code changes for HPA to work?" %} No. GoFr already exposes Prometheus-format metrics on `METRICS_PORT` (default 2121). HPA configuration lives entirely in the adapter rule and the HPA manifest. {% /faq-item %} {% faq-item question="Can I scale a GoFr Pub/Sub subscriber with HPA?" %} HPA can scale on CPU, but consumer-lag-based scaling is better handled by KEDA's Kafka or NATS scalers, which can also scale to zero between batches. {% /faq-item %} {% faq-item question="Why does my HPA show `unknown` for the custom metric?" %} Either prometheus-adapter has not discovered the series yet, the metric name in the HPA manifest does not match the rule's `as:` value, or the labels (`namespace`, `pod`) are missing on the Prometheus series. 
{% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/load-testing # Load Testing {% answer %} Load test GoFr services with k6 or vegeta from outside the cluster, scrape Prometheus during the run for server-side truth, and look at p50/p95/p99 latency, error rate, and throughput together — never just an average. The framework gives you the metrics surface; the test is your responsibility. {% /answer %} ## What to measure A single number ("we did 5k RPS") is not enough. Always report the tuple: - **Latency percentiles** — p50, p95, p99. Averages hide tails. - **Error rate** — non-2xx and timeouts as a percentage of total. - **Throughput** — RPS sustained without error rate climbing. - **Saturation** — CPU, memory, DB connections in use, GC pause time. Latency degrades sharply once any of these saturate. Run the test long enough to see whether numbers are stable. The first 30 seconds usually contain JIT/warmup artifacts. ## k6 example ```javascript import http from 'k6/http'; import { check, sleep } from 'k6'; export const options = { stages: [ { duration: '30s', target: 50 }, // ramp up { duration: '2m', target: 50 }, // steady { duration: '30s', target: 200 }, // step up { duration: '2m', target: 200 }, // steady { duration: '30s', target: 0 }, // ramp down ], thresholds: { http_req_duration: ['p(95)<300', 'p(99)<800'], http_req_failed: ['rate<0.01'], }, }; export default function () { const res = http.get('https://api.example.com/orders/42'); check(res, { 'status is 200': (r) => r.status === 200 }); sleep(1); } ``` Run with `k6 run --out json=results.json script.js`. Thresholds turn the run into a pass/fail. ## vegeta example For simpler GET-heavy tests, vegeta is one shell command: ```bash echo "GET https://api.example.com/orders/42" | \ vegeta attack -rate=100 -duration=2m | \ vegeta report -type=hist[0,10ms,50ms,100ms,500ms,1s] ``` vegeta also writes raw results that you can replay through `vegeta plot` for visual inspection. 
## Capturing GoFr metrics during the test GoFr exposes Prometheus metrics on `METRICS_PORT` (default 2121, see `pkg/gofr/factory.go`). Scrape them during the run for server-side truth. Useful series: - HTTP request latency histograms (p50/p95/p99 per route). - Request count and status code distribution. - Outbound HTTP service circuit breaker state — `app_http_circuit_breaker_state` (see [Circuit Breaker](/docs/advanced-guide/circuit-breaker)). - Go runtime: `go_goroutines`, `go_gc_duration_seconds`, `process_resident_memory_bytes`. During the run, keep a Grafana dashboard with at least these panels open. If client-side and server-side latency diverge, suspect the network or the load generator. ## Bottleneck triage When latency rises, look in this order: 1. **Application CPU** — saturated CPU means you are compute-bound or doing too much per request. Profile with `pprof`: GoFr already mounts the standard `net/http/pprof` handlers (`/debug/pprof/`, `/debug/pprof/profile`, `/debug/pprof/heap`, etc.) on the metrics server (port `METRICS_PORT`, default 2121) — fetch a profile with `go tool pprof http://<host>:2121/debug/pprof/profile`. There's no need to register your own handler. In production, restrict access to that port to your internal network, since it exposes goroutine and heap profiles. 2. **Database** — slow queries, connection pool exhaustion, lock waits. Check the SQL datasource's pool stats and DB-side metrics. `MaxOpenConns` is often the culprit. 3. **Downstream services** — GoFr's outbound HTTP client metrics show which downstream is slowing you down. Circuit breaker transitions are visible via `app_http_circuit_breaker_state`. 4. **GC** — long GC pauses correlate with allocation in hot paths. `go_gc_duration_seconds` and `runtime/metrics` show this. 5. **Network** — load generator can't push more, or NLB/ALB connection limits. Run the generator from inside the cluster to compare.
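To pull the profiles named in step 1, port-forward the metrics port and point `pprof` at it. A sketch (the deployment name `orders-api` is illustrative):

```bash
# Forward the metrics port of the running pod to localhost.
kubectl port-forward deploy/orders-api 2121:2121 &

# 30-second CPU profile, top functions by flat time.
go tool pprof -top "http://localhost:2121/debug/pprof/profile?seconds=30"

# Heap snapshot for allocation hot spots; goroutine dump for leak triage.
go tool pprof -top http://localhost:2121/debug/pprof/heap
curl -s "http://localhost:2121/debug/pprof/goroutine?debug=1" | head -n 20
```

Capture a profile while the load test is still running; an idle service produces a flat, uninformative CPU profile.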
## Establish a baseline before changes Run the same scenario monthly and on every major release. Save the k6/vegeta output and the Prometheus snapshots. Regressions become obvious only when there is a baseline to compare against. This guide deliberately does not publish baseline numbers — they depend entirely on your hardware, payload, and dependencies. ## Test from a realistic location Running k6 from a developer laptop hits the public Internet path. That is fine for end-to-end SLO checks, but if you want to know "how fast is GoFr itself", run the generator inside the same cluster on a pod with no resource limits, hitting the Service ClusterIP. That isolates the application from edge variability. ## Avoiding self-DoS - Do not point load tests at a shared production database. - Spin up a dedicated namespace with the same Helm values as production but a separate datasource. - Cap `vus` (virtual users) below the connection ceiling of any rate-limited downstream you cannot mock. ## Reporting Capture, for each run: - Test scenario (request mix, ramp profile, total duration). - Service version (image SHA). - p50/p95/p99 latency, error rate, throughput. - Resource usage at peak (CPU%, memory, DB connections). - The git commit and any feature flags toggled. This metadata is what makes a regression diagnosable a month later. {% faq %} {% faq-item question="Where do I scrape GoFr's metrics during a load test?" %} On the metrics port, default 2121, configurable via `METRICS_PORT`. Path is `/metrics`. Set `METRICS_PORT=0` to disable. {% /faq-item %} {% faq-item question="Should I report average latency?" %} No. Average hides tail latency, which is where users feel pain. Always report p50, p95, and p99 together, plus error rate. {% /faq-item %} {% faq-item question="k6 or vegeta?" %} k6 is better for scripted scenarios with thresholds and assertions. vegeta is faster to set up for steady-rate GET load. Both work fine against GoFr. 
{% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/multi-environment-deployment # Multi-Environment Deployment {% answer %} GoFr selects per-environment configuration through the `APP_ENV` env var, which picks `configs/.{APP_ENV}.env` at startup; in Kubernetes you ship one image and override every value through environment-specific ConfigMaps, Secrets, and Helm values files. Keep namespaces or clusters isolated, point each environment at its own database and tracing endpoint, and promote by tag — never by rebuilding for the target. {% /answer %} ## When to use Any time you have more than one running copy of a service — even if it's just `dev` and `prod` — you need a deployment story that prevents config drift. GoFr's twelve-factor config makes the *what* easy; this guide covers the *how* on Kubernetes. ## One image, many environments The build artifact never changes between environments. The same image digest that ran in staging for a day promotes to production. Everything that varies — connection strings, log levels, feature flags, replica count, resource limits — comes from the cluster. ```text git push tag v1.4.2 ──▶ CI builds image, signs, pushes ──▶ deploy to staging (APP_ENV=staging) ──▶ smoke + soak ──▶ deploy to prod (APP_ENV=prod) ``` GoFr reads `APP_ENV` to decide which override file to overlay on `configs/.env`. In Kubernetes, the override file is largely vestigial — every value comes from a `ConfigMap` or `Secret` injected with `envFrom` (see [Twelve-Factor Config](/docs/guides/twelve-factor-config)). Ship the same image to every environment and differentiate behavior through env / ConfigMap / Helm values rather than by branching on `APP_ENV` inside `main.go` — the moment two environments execute different code paths, the artifact you tested in staging stops being the artifact running in production.
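A sketch of the Deployment fragment this implies; the names `orders-config` and `orders-secrets` are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  template:
    spec:
      containers:
        - name: orders-api
          image: ghcr.io/example/orders-api:1.4.2   # same artifact in every env
          envFrom:
            - configMapRef:
                name: orders-config    # APP_ENV, DB_HOST, LOG_LEVEL, ...
            - secretRef:
                name: orders-secrets   # DB_PASSWORD, API keys, ...
```

The ConfigMap and Secret are what differ per namespace; the Deployment template itself stays identical.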
## Namespace per env vs cluster per env **Namespace per env** (`staging`, `prod` in the same cluster) is cheaper and simpler, but shares a control plane and nodes — a runaway prod workload can starve staging, and compliance frameworks often reject it for regulated data. **Cluster per env** isolates everything but doubles operational overhead. Most teams start with namespaces and graduate to separate prod clusters once compliance or noisy-neighbor pressure forces the move. Whichever you pick, never share the same database, broker, or tracing backend across envs. ## Helm values per environment Keep one chart, one `values.yaml` for defaults, and one overrides file per env. Per-env files override only what's different — replica count, log level, datasource hosts. ```yaml # values.yaml replicaCount: 2 image: { repository: ghcr.io/example/orders-api } config: HTTP_PORT: "8000" METRICS_PORT: "2121" LOG_LEVEL: INFO TRACE_EXPORTER: otlp SHUTDOWN_GRACE_PERIOD: 30s ``` ```yaml # values-staging.yaml image: { tag: 1.4.2 } config: APP_ENV: staging LOG_LEVEL: DEBUG DB_HOST: postgres.staging.svc.cluster.local # GoFr's OTLP exporter speaks gRPC; use bare host:port (no http://) and the # OTLP gRPC port 4317 (4318 is OTLP HTTP, which GoFr does NOT use). TRACER_URL: otel-collector.observability.svc.cluster.local:4317 ``` ```yaml # values-prod.yaml replicaCount: 10 image: { tag: 1.4.2 } config: APP_ENV: prod DB_HOST: postgres-primary.prod.svc.cluster.local TRACER_URL: otel-collector.observability.svc.cluster.local:4317 DB_MAX_OPEN_CONNECTION: "20" ``` Apply with `helm upgrade --install orders-api ./chart -n prod -f values.yaml -f values-prod.yaml`. Same chart, same image tag, different values → different environment. ## Promotion flow CI tags an image (`1.4.2`). `helm upgrade` deploys it to staging; after integration tests and a soak window, the same `1.4.2` tag promotes to prod. If a problem surfaces, `helm rollback orders-api -n prod` reverts. 
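The promotion flow above, expressed as commands. A sketch: the release name, chart path, and tag are illustrative:

```bash
# CI has already pushed ghcr.io/example/orders-api:1.4.2.
helm upgrade --install orders-api ./chart -n staging \
  -f values.yaml -f values-staging.yaml

# After integration tests and the soak window, promote the SAME tag.
helm upgrade --install orders-api ./chart -n prod \
  -f values.yaml -f values-prod.yaml

# If prod misbehaves, revert to the previous release revision.
helm rollback orders-api -n prod
```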
Never `docker build` again between envs — that invalidates the artifact you tested. ## Datasource separation Each environment must point at its own datasources. Sharing a database across staging and prod is a data-corruption incident waiting to happen — staging migrations can drop columns prod still reads. - Separate `DB_HOST` / `DB_NAME` per env. - Separate Pub/Sub topics or namespaces (Kafka cluster + topic prefix, NATS account, MQTT broker). - Separate Redis instances or at least separate `REDIS_DB` numbers. - Separate object storage buckets. For databases under heavy migration churn, give staging its own writable replica with a nightly snapshot from prod — close enough to be representative, isolated enough to be safe. See [Handling Data Migrations](/docs/advanced-guide/handling-data-migrations) for the migration story itself. ## Telemetry segregation Tag every signal with the environment so dashboards and alerts can filter. Set a different `TRACER_URL` per env, or share a collector with an `env` resource attribute; use `TRACER_RATIO` (default 1) to drop prod sampling if volume is too high. Use `LOG_LEVEL=DEBUG` in staging, `INFO` in prod, and toggle without redeploying via `REMOTE_LOG_URL` (see [Remote Log Level Change](/docs/advanced-guide/remote-log-level-change)). Add an `env` Prometheus label via your scrape config so the same alert rule can fire per-environment with different thresholds. Staging alerts should page a chat channel; prod alerts page on-call. ## Verification ```bash kubectl exec -n prod deploy/orders-api -- env | grep -E '^(APP_ENV|DB_HOST|TRACER_URL|LOG_LEVEL)=' curl https://orders-api.prod.example.com/.well-known/health ``` The first command verifies the running container actually has the env you expect; the second confirms the service is reachable. {% faq %} {% faq-item question="What is the env var name for selecting the environment in GoFr?" %} `APP_ENV`. 
GoFr uses it to overlay `configs/.{APP_ENV}.env` on top of `configs/.env`, and you can read it from application code via `app.Config.Get("APP_ENV")`. {% /faq-item %} {% faq-item question="Should staging and production share a database?" %} No. Migrations applied in staging can break the schema prod relies on, and any data leakage is a compliance incident. Always run separate databases (or separate writable instances of the same engine). {% /faq-item %} {% faq-item question="How do I change log level without redeploying?" %} Set `REMOTE_LOG_URL` to a control-plane endpoint and adjust the level there — GoFr polls on `REMOTE_LOG_FETCH_INTERVAL` (an integer number of seconds; default `15`, i.e. 15s). See the Remote Log Level Change guide. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/production-logging # Production Logging {% answer %} GoFr emits structured JSON logs to stdout when not attached to a TTY, with fields `level`, `time`, `message`, `trace_id`, and `gofrVersion`. Set the threshold via `LOG_LEVEL` (DEBUG, INFO, NOTICE, WARN, ERROR, FATAL), correlate by `trace_id`, and ship to Loki, CloudWatch, or Stackdriver from the container's stdout — no in-app shippers needed. {% /answer %} ## The log line shape GoFr's logger writes JSON when the output is not a terminal (verified in `pkg/gofr/logging/logger.go`).
The top-level envelope has: | Field | Source | |---------------|---------------------------------------------| | `level` | One of DEBUG, INFO, NOTICE, WARN, ERROR, FATAL | | `time` | RFC3339Nano timestamp (Go's default `time.Time` JSON marshaling, per `pkg/gofr/logging/logger.go:55`) | | `message` | The argument passed to the logger — a string for app logs, or a structured object for HTTP request logs | | `trace_id` | W3C trace ID, `omitempty` — present only when the call site supplies a trace context | | `gofrVersion` | Framework version baked into the binary | For HTTP request logs (emitted by the request-logging middleware), the value of `message` is itself a structured `RequestLog` object with the fields `trace_id`, `span_id`, `start_time`, `response_time`, `method`, `user_agent`, `ip`, `uri`, and `response`. So on a request log line you will see `trace_id` both at the top level and nested inside `message` — by design: the top-level field is for log aggregators, the nested copy is part of the request record. A typical container log stream therefore looks like: ```json {"level":"INFO","time":"...","message":"Loaded config from file: ./configs/.env","gofrVersion":"v1.46.0"} {"level":"INFO","time":"...","message":{"trace_id":"7ca3...","span_id":"...","method":"GET","uri":"/orders","response":200},"gofrVersion":"v1.46.0"} ``` In a TTY (local development), the output is human-readable colored text; in containers it is one JSON object per line — what every log shipper expects. ## Log levels GoFr's `Level` type and string mapping live in `pkg/gofr/logging/level.go`. Set the level at startup with the `LOG_LEVEL` environment variable. The default is INFO. Available values: - `DEBUG` — verbose; use during incident investigation. - `INFO` — normal operations. - `NOTICE` — significant non-error events. - `WARN` — recoverable problems. - `ERROR` — errors that need attention. - `FATAL` — process exits. Changing the level normally requires a redeploy. 
To avoid that, GoFr supports remote runtime updates — see [Remote Log Level Change](/docs/advanced-guide/remote-log-level-change). Configure `REMOTE_LOG_URL` and `REMOTE_LOG_FETCH_INTERVAL` and the level can be flipped to DEBUG mid-incident without restarting pods. ## Correlating logs with traces Every HTTP request flows through the tracer middleware (`pkg/gofr/http/middleware/tracer.go`), which extracts the W3C TraceContext from the inbound request. When there is a trace context, the request-logging middleware records the trace ID inside the `message` object of HTTP request logs (alongside `span_id`, `method`, `uri`, etc.). The top-level `trace_id` envelope field is also populated when the call site supplies a trace context; it is omitted on log lines without one (such as startup messages). In practice this means: pivot from a trace in Jaeger/Tempo to logs by querying for the trace ID, but configure your shipper to extract it from the nested `message` for request logs (see the Promtail snippet below) so the field is searchable regardless of which path populated it. ## Aggregation patterns ### Loki + Promtail (any Kubernetes) Promtail tails container stdout and ships to Loki. Because GoFr already emits JSON, use Promtail's `json` pipeline stage to extract `level` and `trace_id`. Note that for HTTP request logs the `trace_id` lives inside the nested `message` object, so extract from both locations: ```yaml pipeline_stages: - json: expressions: level: level trace_id: trace_id # populated on lines with a top-level trace_id message: message - json: source: message # parse the nested RequestLog object when message is JSON expressions: nested_trace_id: trace_id - template: source: trace_id template: '{{ or .trace_id .nested_trace_id }}' - labels: level: ``` Avoid making `trace_id` a label (high cardinality). Keep it as a field and search via `|= "trace_id"` matches, or use Loki's `json` LogQL parser to filter at query time. 
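Two query-time pivots this setup enables, sketched below. The `app` label depends on your Promtail `labels` stage, and note that Loki's `| json` parser flattens nested keys with `_`, so the request-log copy of the trace ID becomes `message_trace_id`; the trace ID value is a placeholder:

```text
# All ERROR lines for the service (level extracted as a label by Promtail):
{app="orders-api", level="ERROR"}

# Pivot from a trace to its request logs without a trace_id label:
{app="orders-api"} | json | message_trace_id = "<trace-id>"
```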
### CloudWatch Logs (EKS) The CloudWatch Logs agent (Fluent Bit on Fargate, `fluentd`/`fluent-bit` on managed nodes) ships container stdout. Configure the parser as `json` so the structured fields become CloudWatch Logs Insights columns. Query with: ```text fields @timestamp, level, trace_id, message | filter level = "ERROR" ``` ### Cloud Logging / Stackdriver (GKE) GKE forwards container stdout automatically. Map `level` → `severity` so the Cloud Logging UI colors entries correctly. The container needs no changes; configure the agent in the cluster. ## Volume control Logging is cheap until it is not. Practical defaults: - Run production at `INFO`. Drop to `DEBUG` only via the remote log-level mechanism, scoped to a single service. - Rate-limit hot paths in your own code: log a sample (1 in N) for routine 200s. - Redact PII before it leaves the process. Do not rely on the aggregator to scrub. ## Secrets redaction If a secret might appear in a log line (rare for GoFr's auth middleware, which uses `subtle.ConstantTimeCompare` for credentials), redact at the application level before logging. Sidecars that scrub regexes after the fact are a fallback, not a primary control. ## Multi-line tracebacks Go's panic stack traces are multi-line. Use a parser that joins continuation lines (Promtail's `multiline` stage, Fluent Bit's `Multiline_Parser go`) so a panic shows up as a single event. ## Probe noise `/.well-known/alive` and `/.well-known/health` get hit several times per second by the kubelet. They will dominate access logs if you log every request. Configure your platform to suppress probe spam, or use sampled logging on those routes. {% faq %} {% faq-item question="What format does GoFr log in production?" %} JSON, one object per line, on stdout. Top-level fields are `level`, `time`, `message`, `trace_id` (omitempty), and `gofrVersion`. For HTTP request logs `message` is a nested object containing `trace_id`, `span_id`, `method`, `uri`, `response`, etc. 
The TTY path produces colored text only when stdout is a terminal. {% /faq-item %} {% faq-item question="How do I correlate a log line with a trace?" %} Use the W3C trace ID. It appears at the top level when the call site has a trace context, and inside the nested `message` object on every HTTP request log. Configure your shipper to extract it from both spots so the field is searchable regardless of source. {% /faq-item %} {% faq-item question="Can I change the log level without redeploying?" %} Yes. Set `REMOTE_LOG_URL` to an endpoint that returns the desired level and GoFr will pick up the change at the configured fetch interval. See the Remote Log Level Change guide. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/production-prometheus-kubernetes # GoFr Prometheus on Kubernetes {% answer %} GoFr exposes Prometheus metrics on a separate port (`METRICS_PORT`, default `2121`) at `/metrics`. In Kubernetes, scrape it either with legacy `prometheus.io/*` pod annotations or — preferred — a `ServiceMonitor` from kube-prometheus-stack. Build alerts on the four golden signals (latency, errors, traffic, saturation) using the metrics GoFr emits by default plus any custom counters and histograms you register. {% /answer %} ## When to use this guide You have GoFr running in Kubernetes (see {% new-tab-link newtab=false title="Deploying to Kubernetes" href="/docs/guides/deploying-to-kubernetes" /%}) and either kube-prometheus-stack or a Prometheus instance scraping the cluster. This page covers the *operational* side — scraping, alerting, dashboards. For instrumenting code, see {% new-tab-link newtab=false title="Publishing Custom Metrics" href="/docs/advanced-guide/publishing-custom-metrics" /%}. ## What `/metrics` looks like GoFr starts a separate HTTP server on `METRICS_PORT` (default `2121`) that serves Prometheus-format metrics at `/metrics`. Setting `METRICS_PORT=0` disables the server entirely — useful for short-lived CLI commands. 
A truncated sample (label sets and HELP strings match the framework's actual output as of the current `pkg/gofr/container/container.go` registrations): ```text # HELP app_http_response Response time of HTTP requests in seconds. # TYPE app_http_response histogram app_http_response_bucket{path="/orders",method="GET",status="200",le="0.005"} 412 app_http_response_bucket{path="/orders",method="GET",status="200",le="0.01"} 580 app_http_response_bucket{path="/orders",method="GET",status="200",le="+Inf"} 612 app_http_response_sum{path="/orders",method="GET",status="200"} 4.21 app_http_response_count{path="/orders",method="GET",status="200"} 612 # HELP app_sql_open_connections Number of open SQL connections. # TYPE app_sql_open_connections gauge app_sql_open_connections 4 # HELP transaction_success used to track the count of successful transactions # TYPE transaction_success counter transaction_success_total 87 ``` Default metric names are stable (`app_http_response`, `app_sql_*`, `app_redis_*`, etc.). The OpenTelemetry-to-Prometheus exporter additionally adds `otel_scope_*` labels to every series. Custom metrics you register via `app.Metrics().NewCounter(...)` appear with the name and labels you supplied. To see the live label set against your own service, run `curl http://localhost:2121/metrics` and inspect the output directly. ## Option 1: pod annotations (older clusters / vanilla Prometheus) If you run a single Prometheus that uses kubernetes_sd with the legacy annotation pattern, add these to your Deployment's pod template: ```yaml spec: template: metadata: labels: app.kubernetes.io/name: orders annotations: prometheus.io/scrape: "true" prometheus.io/port: "2121" prometheus.io/path: "/metrics" ``` These annotations only work if your Prometheus's `scrape_configs` actually relabels off them — they are a convention, not a Kubernetes feature. kube-prometheus-stack ignores them by default, which is why ServiceMonitor exists. 
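For reference, the relabel convention those annotations rely on looks like this. A sketch of the classic `kubernetes-pods` scrape job (not something kube-prometheus-stack ships by default):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Honor a custom metrics path annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Rewrite the target address to the annotated port (2121 for GoFr).
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```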
## Option 2: ServiceMonitor (kube-prometheus-stack — preferred) kube-prometheus-stack ships the Prometheus Operator, which discovers scrape targets via `ServiceMonitor` and `PodMonitor` CRDs. Assuming the Service from the deployment guide names its metrics port `metrics`: ```yaml apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: orders namespace: default labels: # Must match the label kube-prometheus-stack's Prometheus selects on. # Default helm value is `release: <helm-release-name>`. release: kube-prometheus-stack spec: selector: matchLabels: app.kubernetes.io/name: orders namespaceSelector: matchNames: - default endpoints: - port: metrics path: /metrics interval: 30s scrapeTimeout: 10s honorLabels: true ``` Two gotchas worth knowing in advance: - **The `release` label is not magic.** The Prometheus CR selects ServiceMonitors via `serviceMonitorSelector`. Inspect your install with `kubectl get prometheus -A -o yaml` and use whichever label that selector requires. - **`namespaceSelector`** must include the namespace the Service lives in — otherwise the operator silently ignores the ServiceMonitor. ## Recording rules and alerts (golden signals) The four golden signals from the SRE book — latency, traffic, errors, saturation — map cleanly onto GoFr's defaults: ```yaml apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: orders-rules namespace: default labels: release: kube-prometheus-stack spec: groups: - name: orders.recording interval: 30s rules: # Traffic: requests per second. - record: orders:http_requests:rate1m expr: sum by (path, method, status) (rate(app_http_response_count{job="orders"}[1m])) # Errors: 5xx as a fraction of total. - record: orders:http_5xx_ratio:rate5m expr: | sum by (path) (rate(app_http_response_count{job="orders",status=~"5.."}[5m])) / sum by (path) (rate(app_http_response_count{job="orders"}[5m])) # Latency: p95 in seconds.
- record: orders:http_p95_seconds:rate5m expr: | histogram_quantile( 0.95, sum by (le, path) (rate(app_http_response_bucket{job="orders"}[5m])) ) - name: orders.alerts rules: - alert: OrdersHighErrorRate expr: orders:http_5xx_ratio:rate5m > 0.05 for: 10m labels: severity: page annotations: summary: "orders 5xx ratio > 5% on {{ $labels.path }}" description: "5xx ratio is {{ $value | humanizePercentage }} for the last 10m." - alert: OrdersHighLatency expr: orders:http_p95_seconds:rate5m > 0.5 for: 10m labels: severity: ticket annotations: summary: "orders p95 latency > 500ms on {{ $labels.path }}" - alert: OrdersSaturationCPU expr: | sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="default",pod=~"orders-.*"}[5m])) / sum by (pod) (kube_pod_container_resource_limits{namespace="default",pod=~"orders-.*",resource="cpu"}) > 0.85 for: 15m labels: severity: ticket annotations: summary: "orders pod {{ $labels.pod }} CPU > 85% of limit" - alert: OrdersDown expr: up{job="orders"} == 0 for: 2m labels: severity: page annotations: summary: "Prometheus cannot scrape orders" ``` Thresholds (5% errors, 500ms p95, 85% CPU saturation) are starting points — calibrate against your actual traffic pattern before paging on them. ## Dashboards Don't ship hand-rolled dashboards if a community one will do. Good starting points: - **Go runtime:** search the [Grafana dashboard library](https://grafana.com/grafana/dashboards/?dataSource=prometheus&search=go) for an OpenTelemetry / `go_*` runtime dashboard covering GC, goroutines, and heap. - **Kubernetes pod resources:** kube-prometheus-stack ships `kubernetes-mixin` dashboards out of the box. - **HTTP RED method:** any RED-method dashboard (rate / errors / duration) works against `app_http_response_*`. For application-specific dashboards, build one panel per custom metric you register. Use the same labels in panels that you use in alerts to keep cardinality predictable. 
## Exemplars: linking metrics to traces If your Prometheus is built with exemplar support and the OpenTelemetry Collector is configured to attach exemplars to histogram buckets (via the OTLP `exemplars` feature in the SDK), you can click from a slow `histogram_quantile` panel in Grafana directly to the trace in Tempo or Jaeger. Wiring this end-to-end requires: 1. GoFr exporting traces to a Collector (see {% new-tab-link newtab=false title="Production Tracing" href="/docs/guides/production-tracing" /%}). 2. The Collector forwarding metrics + exemplars to Prometheus. 3. Grafana with the trace datasource correlated to the metrics datasource. GoFr's HTTP histogram (`app_http_response`) records under a span, so when exemplar emission is enabled in the pipeline the `traceID` rides along with the histogram observation. ## Production tips - **Cardinality first.** A counter labeled by `user_id` will explode your Prometheus. Stick to bounded labels — `path`, `method`, `status`, `endpoint`. See the cardinality note in {% new-tab-link newtab=false title="Publishing Custom Metrics" href="/docs/advanced-guide/publishing-custom-metrics" /%}. - **NetworkPolicy on `2121`.** Only allow Prometheus pods to reach the metrics port. There's no auth on `/metrics` by design, and there shouldn't be — keep it network-isolated instead. - **`honorLabels: true`** prevents Prometheus from overwriting labels your app sets (e.g., when you've used `instance` as a custom label). - **Scrape interval == alert window divisor.** If you scrape every 30s, don't write `rate(...[10s])` — you'll get NaNs. - **Disable metrics in CLI mode.** For one-shot CLI commands using the same binary, set `METRICS_PORT=0` so you don't bind a server you don't need. ## Verification ```bash # Check the metrics endpoint directly. kubectl port-forward svc/orders 2121:2121 curl -s http://localhost:2121/metrics | grep -E "^app_http_response|^transaction_success" # Confirm Prometheus picked up the target. 
kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090 # Open http://localhost:9090/targets — search for "orders". # Run a query. curl -s 'http://localhost:9090/api/v1/query?query=up{job="orders"}' # Confirm rules loaded. curl -s http://localhost:9090/api/v1/rules | jq '.data.groups[].name' ``` {% faq %} {% faq-item question="My ServiceMonitor is created but Prometheus doesn't scrape." %} Three checks in order. First, `kubectl get servicemonitor orders -o yaml` and confirm the `labels` match what your Prometheus CR's `serviceMonitorSelector` expects (often `release: <helm-release-name>`). Second, `namespaceSelector.matchNames` must include the Service's namespace, or use `any: true`. Third, the Service's port must have a `name` (not just a number) and the ServiceMonitor's `endpoints[].port` must reference that name. {% /faq-item %} {% faq-item question="Should I use ServiceMonitor or PodMonitor?" %} Use `ServiceMonitor` when there's already a Service for your app — that's the common case, and it's the right level of indirection. Use `PodMonitor` for headless workloads or when you want to scrape every pod independently (e.g., per-pod custom counters that aren't aggregated through the Service). {% /faq-item %} {% faq-item question="How do I expose only `/metrics` and not `/.well-known/health` to Prometheus?" %} GoFr already runs them on different ports — `2121` (metrics) vs `8000` (HTTP including `/.well-known/*`). Apply a NetworkPolicy that allows Prometheus to reach `2121` only, and keep ingress traffic on `8000`. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/production-tracing # Production Tracing for GoFr {% answer %} GoFr ships built-in OpenTelemetry tracing — every HTTP request, gRPC call, and datasource operation is traced automatically.
Configure the exporter via `TRACE_EXPORTER` (`otlp`, `jaeger`, `zipkin`, or `gofr`) and `TRACER_URL`, set `TRACER_RATIO` for head-based sampling, and W3C Trace Context propagation flows through GoFr's HTTP service client without extra code. {% /answer %} {% howto name="Wire production tracing for a GoFr service" description="Configure OTLP gRPC tracing in GoFr, point it at Jaeger / Tempo / Honeycomb, and tune sampling for production." steps=[{"name": "Set TRACE_EXPORTER", "text": "Set TRACE_EXPORTER=otlp in configs/.env (or an env-based ConfigMap in K8s) — GoFr ships an OTLP gRPC exporter."}, {"name": "Set TRACER_URL", "text": "Set TRACER_URL to a bare host:port (no http:// scheme) on port 4317 for OTLP gRPC; route to Jaeger collector, Tempo, or any OTLP backend."}, {"name": "Tune TRACER_RATIO", "text": "Set TRACER_RATIO to 1.0 in dev for full sampling; in prod step down to 0.1 or lower based on volume."}, {"name": "Add custom spans", "text": "Use ctx.Trace(name) inside handlers to mark sub-operations; existing HTTP, gRPC, and datasource spans are emitted automatically."}, {"name": "Verify in the backend", "text": "Hit a route, then open the Jaeger UI / Grafana Tempo and search for the service by APP_NAME — confirm spans show up with trace_id."}, {"name": "Propagate across services", "text": "GoFr injects W3C TraceContext on outbound calls via ctx.GetHTTPService — so two GoFr services share a single trace ID end to end."}] /%} ## When to use this guide You have GoFr running in Kubernetes (or any container platform) and want traces flowing into a backend — Jaeger, Grafana Tempo, an OpenTelemetry Collector, or a vendor that accepts OTLP. This guide covers exporter configuration, sampling, and propagation across multiple services. For adding application-level spans inside handlers, see {% new-tab-link newtab=false title="Custom Spans In Tracing" href="/docs/advanced-guide/custom-spans-in-tracing" /%}. 
## What GoFr traces automatically Once tracing is enabled, GoFr instruments without code changes: - **HTTP server** — every incoming request becomes a root span (or a child if upstream sent W3C trace headers). - **HTTP client** — outgoing calls via the GoFr service client (with circuit breaker / retry / rate limit) are traced and propagate context. - **gRPC** — server and client interceptors emit spans. - **Datasources** — SQL, Redis, Mongo, Cassandra, Pub/Sub publishers and subscribers (Kafka, NATS, SQS, Google Pub/Sub) emit spans for each operation. - **Migrations** — recorded as spans, useful for debugging long-running schema changes. What custom spans add (`ctx.Trace("name")`) is application logic — business operations that span multiple datasource calls or pure-CPU work you want to time. ## Configuration GoFr reads tracing config from environment variables. The relevant keys (verified against `pkg/gofr/otel.go`): | Variable | Purpose | Default | |---|---|---| | `TRACE_EXPORTER` | One of `otlp`, `jaeger`, `zipkin`, `gofr` | unset (tracing disabled) | | `TRACER_URL` | Endpoint for the chosen exporter | unset | | `TRACER_HOST` | **Deprecated** — use `TRACER_URL` | unset | | `TRACER_PORT` | **Deprecated** — use `TRACER_URL` | `9411` | | `TRACER_RATIO` | Head-based sampling ratio (0.0–1.0) | `1` | | `TRACER_HEADERS` | Custom OTLP headers, `Key1=Value1,Key2=Value2` | unset | | `TRACER_AUTH_KEY` | Shortcut for `Authorization` header | unset | Tracing is **disabled** if neither `TRACE_EXPORTER` nor `TRACER_URL` is set — GoFr logs `tracing is disabled, as configs are not provided` at debug level. The sampler is `ParentBased(TraceIDRatioBased(TRACER_RATIO))`, so a sampling decision made upstream is honored. `zipkin` is supported but deprecated; the framework logs a warning recommending `otlp` instead. The `gofr` exporter ships traces to GoFr's hosted tracer at `https://tracer-api.gofr.dev/api/spans` (override with `TRACER_URL`). 
## Backend recipes ### Jaeger (OTLP gRPC) Modern Jaeger (1.35+) accepts OTLP natively on port `4317`: ```yaml # ConfigMap fragment TRACE_EXPORTER: "jaeger" TRACER_URL: "jaeger-collector.observability.svc.cluster.local:4317" TRACER_RATIO: "0.1" ``` `jaeger` and `otlp` use the same OTLP gRPC exporter under the hood — they differ only in log labeling. ### Grafana Tempo / OpenTelemetry Collector Point at any OTLP gRPC endpoint: ```yaml TRACE_EXPORTER: "otlp" TRACER_URL: "otel-collector.observability.svc.cluster.local:4317" TRACER_RATIO: "0.1" ``` Running an OTel Collector as a sidecar or DaemonSet is the recommended pattern: it does tail-based sampling, batching, and can fan out to multiple backends without changing the app. ### Honeycomb / Datadog / Vendor OTLP For SaaS backends that accept OTLP and require an API key: ```yaml TRACE_EXPORTER: "otlp" TRACER_URL: "api.honeycomb.io:443" TRACER_HEADERS: "x-honeycomb-team=YOUR_API_KEY,x-honeycomb-dataset=orders" TRACER_RATIO: "0.1" ``` Or with a single auth header: ```yaml TRACER_AUTH_KEY: "Bearer YOUR_TOKEN" ``` GoFr's OTLP exporter currently uses an insecure (cleartext) gRPC connection inside the cluster — for SaaS endpoints over the public internet, route through an OTel Collector that terminates TLS, or rely on a service mesh. ## Sampling: head-based vs tail-based `TRACER_RATIO` is **head-based**: the sampling decision is made when the trace starts. With `TRACER_RATIO=0.1`, 10% of root spans are kept; the other 90% are dropped at the source. Cheap, predictable, but you cannot retroactively keep a slow or errored trace that wasn't sampled. For production-grade observability, **tail-based** sampling — done in an OpenTelemetry Collector with the `tail_sampling` processor — lets you keep all traces that contain errors or exceed a latency threshold while sub-sampling the happy path. The pattern is: app sends 100% (or a high ratio) to the local collector; collector decides what to ship onward. 
A starting matrix: | Environment | `TRACER_RATIO` | Notes | |---|---|---| | Local dev | `1` | See everything | | Staging | `1` | Catch issues before prod | | Production (low traffic, < 50 RPS) | `1` | Volume is fine | | Production (high traffic) | `0.05`–`0.1` | Or sample 100% to a collector and tail-sample there | ## Propagation across services GoFr sets up a `CompositeTextMapPropagator(TraceContext{}, Baggage{})`, so the W3C `traceparent` and `baggage` headers are honored on incoming requests and written on outgoing requests through the GoFr HTTP service client. No extra code is needed: ```go package main import ( "encoding/json" "gofr.dev/pkg/gofr" ) func main() { app := gofr.New() app.AddHTTPService("payments", "http://payments.default.svc.cluster.local") app.GET("/checkout", func(ctx *gofr.Context) (any, error) { span := ctx.Trace("checkout.compute-total") defer span.End() // The downstream span on payments will be a child of this trace. // GetWithHeaders takes (ctx, path, queryParams, headers) and returns (*http.Response, error). httpResp, err := ctx.GetHTTPService("payments"). GetWithHeaders(ctx, "/charge", nil, nil) if err != nil { return nil, err } defer httpResp.Body.Close() var resp any if err := json.NewDecoder(httpResp.Body).Decode(&resp); err != nil { return nil, err } return resp, nil }) app.Run() } ``` The downstream `payments` service — also a GoFr app pointed at the same exporter — will record its spans as children of the same trace. In Jaeger or Tempo, you'll see the full chain end-to-end. ## Production tips - **One exporter, many services:** point all your services at the same collector. Querying a trace that hops services is the whole point. - **Resource attributes:** GoFr sets `service.name` from `APP_NAME` (default `gofr-app`). Set `APP_NAME` per-deployment so traces are attributable. - **Don't sample on the client when you can sample on the collector** — once dropped at the source, a trace is gone forever. 
- **Watch the exporter error log:** GoFr installs a custom OTel error handler (`otelErrorHandler`) that logs exporter failures via the standard logger. If you see these in volume, your collector is unreachable or overwhelmed. - **Trace IDs in logs:** include the trace ID in your logs to jump from a noisy log line to its trace. GoFr's structured logger and trace context share `*gofr.Context`, so you can read `span.SpanContext().TraceID()` and log it. ## Verification ```bash # 1. Confirm env is set inside the pod. kubectl exec deploy/orders -- env | grep -E "TRACE_|TRACER_" # 2. Generate traffic. kubectl port-forward svc/orders 8080:80 for i in $(seq 1 50); do curl -s http://localhost:8080/checkout > /dev/null; done # 3. Confirm spans are flowing in the collector or backend logs. kubectl logs -n observability deploy/otel-collector | grep -i orders # 4. Open Jaeger UI and search service=orders. kubectl port-forward -n observability svc/jaeger-query 16686:16686 # http://localhost:16686 ``` {% faq %} {% faq-item question="Tracing is configured but I see no spans in the backend." %} Check three things in order. First, GoFr logs `Exporting traces to at ` on startup — if absent, the exporter never initialized; verify `TRACE_EXPORTER` is one of `otlp`, `jaeger`, `zipkin`, or `gofr`. Second, port-forward to the collector and confirm gRPC `4317` is reachable from the pod. Third, check `TRACER_RATIO` — `0` would silently drop everything. {% /faq-item %} {% faq-item question="Why are my downstream service's spans showing up as separate traces?" %} The downstream call must go through GoFr's HTTP service client (`app.AddHTTPService` + `ctx.GetHTTPService`). A raw `http.Client` will not inject the `traceparent` header. If you must use a custom client, wrap its transport with `otelhttp.NewTransport`. {% /faq-item %} {% faq-item question="Do I need a Collector, or can I send directly to Jaeger/vendor?" 
%} You can send directly — GoFr's OTLP exporter speaks OTLP gRPC to anything that accepts it. A Collector becomes worth it when you want tail-based sampling, batching across many services, or to swap backends without redeploying every service. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/service-mesh-integration # Service Mesh Integration {% answer %} GoFr services run unchanged on Istio or Linkerd because the framework speaks plain HTTP/gRPC. The mesh adds mTLS, traffic policy, and L7 telemetry through a sidecar — but you should pick one owner for retries and circuit breaking, since GoFr's HTTP client already provides both. {% /answer %} ## When a mesh helps versus library-level resilience GoFr already ships several patterns commonly cited as reasons to adopt a mesh: - Service-to-service HTTP client with [Circuit Breaker](/docs/advanced-guide/circuit-breaker), retry, and rate-limit options on `AddHTTPService`. - W3C TraceContext propagation for outbound calls (verified in `pkg/gofr/service/new.go`). - Health endpoints `/.well-known/health` and `/.well-known/alive` for readiness/liveness probes. A mesh becomes worth its sidecar overhead when you need: - mTLS between every pod without code changes. - Traffic shifting / canary by percentage at L7. - Mesh-wide policy (deny-all by default, then allowlist). - A consistent telemetry plane across services written in different languages. If your fleet is GoFr-only and you mainly want resilience, GoFr's built-in features may be enough. ## mTLS without code changes In Istio, apply a `PeerAuthentication` policy in `STRICT` mode and a `DestinationRule` with `tls.mode: ISTIO_MUTUAL`. GoFr requires no change — the sidecar transparently terminates and re-encrypts traffic. In Linkerd, mTLS is automatic between meshed pods. Annotate the namespace with `linkerd.io/inject: enabled` and redeploy. 
For the exact CRD syntax, follow the canonical docs: - Istio: `https://istio.io/latest/docs/tasks/security/authentication/` - Linkerd: `https://linkerd.io/2/features/automatic-mtls/` ## Tracing: mesh spans on top of GoFr's GoFr emits W3C TraceContext (`traceparent`, `tracestate`) on inbound requests and propagates them on outbound HTTP service calls. When you add a mesh: - Istio injects its own server/client spans wrapping GoFr's spans, giving you network-layer timing alongside your application spans. - Both stacks must agree on the propagator. GoFr uses `propagation.TraceContext` + `Baggage` (see `pkg/gofr/otel.go`), which matches the W3C standard Istio and Linkerd use. - Configure your mesh's tracer to send to the same backend (Jaeger, Tempo, OTLP collector) you point GoFr at via `TRACE_EXPORTER` and `TRACER_URL`. ## Retries and circuit breaker: pick one owner This is where teams burn themselves. If both GoFr and the mesh retry, a 503 on a downstream service can multiply into 9+ retries (3 from GoFr times 3 from the mesh). Recommendation: **own resilience in one layer, not both.** - If you want consistent behavior across HTTP and gRPC and you are already using GoFr's `AddHTTPService` with `CircuitBreakerConfig` and `RetryConfig`: turn off mesh-level retries and outlier detection for those routes. - If you want polyglot uniformity: rely on the mesh, and call `AddHTTPService` without retry/circuit-breaker options. GoFr's circuit breaker uses `/.well-known/alive` to probe recovery. If you delegate to the mesh, the mesh's outlier detection plays the same role. ## Sidecar overhead A sidecar adds CPU, memory, and ~1–3ms of latency per hop. For a low-QPS internal service the overhead is usually fine; for a hot path with strict latency budgets, benchmark before adopting. GoFr's library-level resilience has no sidecar cost. 
## Probes still go to GoFr Set Kubernetes probes on the GoFr ports, not the sidecar: ```yaml livenessProbe: httpGet: path: /.well-known/alive port: 8000 readinessProbe: httpGet: path: /.well-known/health port: 8000 ``` `/.well-known/alive` is the liveness signal; `/.well-known/health` includes dependency status and may be slower. {% faq %} {% faq-item question="Do I need to change GoFr code to enable mTLS via Istio or Linkerd?" %} No. The sidecar handles TLS at the network layer, so a plain HTTP listener inside the pod is fine. You only change Kubernetes manifests. {% /faq-item %} {% faq-item question="Should the mesh or GoFr own retries?" %} Pick one. Running both layers at default settings can multiply request volume on a struggling downstream. If you keep GoFr's `RetryConfig`, disable mesh retries for those routes. {% /faq-item %} {% faq-item question="Will mesh-injected spans break GoFr tracing?" %} No. GoFr uses W3C TraceContext, the same standard Istio and Linkerd use, so spans stitch together if both export to the same collector. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/docs/guides/twelve-factor-config # Twelve-Factor Config in GoFr {% answer %} GoFr's `config.Config` interface reads from process environment variables and `.env` files in the `configs/` directory, with system env vars taking precedence over file values. In Kubernetes, ship the same binary across environments and inject configuration through `envFrom` referencing a `ConfigMap` (non-secret) and a `Secret` (credentials), keeping secrets out of source control. {% /answer %} ## When to use Twelve-factor config matters whenever the same artifact runs in more than one place — local laptop, CI, staging, production. GoFr is designed around this from the start: the framework itself is configured by env vars (`HTTP_PORT`, `DB_DIALECT`, `LOG_LEVEL`, etc.), and `app.Config.Get(...)` exposes the same surface to your application code. 
## How GoFr loads config The default loader is `config.NewEnvFile(configFolder, logger)` and the precedence is: 1. **System environment variables** — values present in `os.Environ()` *before* the app starts win. 2. **`configs/.env`** — base values for every environment. 3. **`configs/.<APP_ENV>.env`** — overrides for the named env (e.g., `configs/.staging.env` when `APP_ENV=staging`). Falls back to `configs/.local.env` when `APP_ENV` is unset. The loader actually re-applies the captured initial environment after reading the override file, which is what guarantees system env > file. In a Kubernetes pod, every value injected via `env:` or `envFrom:` is a system env var and therefore beats anything baked into the `configs/` folder of the image. The `Config` interface itself is small: ```go type Config interface { Get(string) string GetOrDefault(string, string) string } ``` Use it from any handler or service: ```go threshold := app.Config.GetOrDefault("PAYMENT_RETRY_THRESHOLD", "3") ``` ## Local development with `.env` Keep a checked-in `configs/.env` with safe defaults and a gitignored `configs/.local.env` for personal overrides: ```dotenv # configs/.env APP_NAME=orders-api HTTP_PORT=8000 LOG_LEVEL=DEBUG DB_DIALECT=postgres DB_HOST=localhost DB_PORT=5432 DB_NAME=orders_dev ``` ```dotenv # configs/.local.env (gitignored) DB_PASSWORD=local-dev-password ``` When `APP_ENV` is unset GoFr loads `.env` then overlays `.local.env`. Set `APP_ENV=staging` and it overlays `.staging.env` instead. ## Kubernetes: ConfigMap + Secret In production, the `configs/` directory inside the image is largely empty (or only holds non-environmental files like a GraphQL schema).
Everything environmental comes from Kubernetes: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: orders-api-config namespace: prod data: APP_NAME: orders-api APP_ENV: prod HTTP_PORT: "8000" METRICS_PORT: "2121" LOG_LEVEL: INFO DB_DIALECT: postgres DB_HOST: postgres-primary.prod.svc.cluster.local DB_PORT: "5432" DB_NAME: orders DB_MAX_OPEN_CONNECTION: "20" DB_MAX_IDLE_CONNECTION: "5" TRACE_EXPORTER: otlp # GoFr's OTLP exporter speaks gRPC (otlptracegrpc). TRACER_URL must be a bare # host:port — no http:// scheme — and the OTLP gRPC port is 4317 (4318 is OTLP # HTTP, which GoFr does NOT use). TRACER_URL: otel-collector.observability.svc.cluster.local:4317 --- apiVersion: v1 kind: Secret metadata: name: orders-api-secret namespace: prod type: Opaque stringData: DB_USER: orders_app DB_PASSWORD: replace-me ``` Wire both into the Deployment with `envFrom` so every key becomes an env var without listing them individually: ```yaml spec: template: spec: containers: - name: api image: ghcr.io/example/orders-api:1.4.2 envFrom: - configMapRef: name: orders-api-config - secretRef: name: orders-api-secret ports: - name: http containerPort: 8000 - name: metrics containerPort: 2121 ``` If both the ConfigMap and Secret define the same key, the *later* `envFrom` entry wins — list the Secret last for credentials that must override defaults. ## Secret management Don't commit `Secret` manifests with real values to Git. Two well-supported options: - {% new-tab-link newtab=true title="Sealed Secrets" href="https://sealed-secrets.netlify.app/" /%} — encrypt the Secret manifest with a controller-held key; safe to commit. - {% new-tab-link newtab=true title="External Secrets Operator" href="https://external-secrets.io/" /%} — sync from Vault, AWS Secrets Manager, GCP Secret Manager, etc. GoFr does not need to know which one you use; both materialize a normal `Secret` that `envFrom` consumes. 
## When to use the `configs/` folder vs env Use **env vars** for anything that varies by environment: hostnames, ports, log levels, feature flags, credentials. Use the **`configs/` folder** for static assets the binary needs at runtime: a GraphQL `schema.graphql`, an OpenAPI `openapi.json` (which GoFr auto-mounts as Swagger UI when present), or a fixed routing table. Bake these into the image — they don't change between staging and prod. ## Anti-patterns - Hardcoded URLs (`"http://payments.internal"`) — breaks the moment staging needs a different host. - Secrets committed to Git, even in a private repo — they leak via clones, CI artifacts, and IDE history. - Reading `os.Getenv` directly in handlers — use `app.Config.Get` so tests can substitute a mock `Config`. - One ConfigMap that mixes secrets with non-secrets — defeats the point of using a Secret resource for RBAC and audit. {% faq %} {% faq-item question="What is the precedence between .env files and process environment in GoFr?" %} System environment variables always win. GoFr captures `os.Environ()` before loading files and re-applies it after `godotenv.Overload`, so anything injected by Kubernetes overrides what's in `configs/.env` and `configs/.<APP_ENV>.env`. {% /faq-item %} {% faq-item question="Where do I set APP_ENV?" %} Anywhere your platform supplies env vars: a `ConfigMap` key in Kubernetes, the shell in CI, or `configs/.local.env` for development. GoFr reads it on startup to pick the override file. {% /faq-item %} {% faq-item question="Can I use Vault or AWS Secrets Manager?" %} Yes, indirectly. Use External Secrets Operator (or a sidecar) to materialize a Kubernetes `Secret`, then reference it with `envFrom`. GoFr only sees env vars and doesn't care about the source. {% /faq-item %} {% /faq %} --- # References ## https://gofr.dev/docs/references/configs # GoFr Configuration Options This document lists all the configuration options supported by the GoFr framework.
The configurations are grouped by category for better organization. ## App {% table %} - Name - Description - Default Value --- - APP_NAME - Name of the application - gofr-app --- - APP_ENV - Name of the environment file to use (e.g., stage.env, prod.env, or local.env). --- - APP_VERSION - Application version - dev --- - LOG_LEVEL - Level of verbosity for application logs. Supported values are **DEBUG, INFO, NOTICE, WARN, ERROR, FATAL** - INFO --- - REMOTE_LOG_URL - URL to remotely change the log level --- - REMOTE_LOG_FETCH_INTERVAL - Time interval (in seconds) to check for remote log level updates - 15 --- - METRICS_PORT - Port on which the application exposes metrics - 2121 --- - HTTP_PORT - Port on which the HTTP server listens - 8000 --- - GRPC_PORT - Port on which the gRPC server listens - 9000 --- - TRACE_EXPORTER - Tracing exporter to use. Supported values: gofr, zipkin, jaeger, otlp. --- - TRACER_HOST - Hostname of the tracing collector. Required if TRACE_EXPORTER is set to zipkin or jaeger. - **DEPRECATED** --- - TRACER_PORT - Port of the tracing collector. Required if TRACE_EXPORTER is set to zipkin or jaeger. - 9411 (**DEPRECATED**) --- - TRACER_URL - URL of the trace collector. Required if TRACE_EXPORTER is set to zipkin or jaeger. --- - TRACER_RATIO - Refers to the proportion of traces that are exported through sampling. It is an optional configuration. By default, this ratio is set to 1. --- - TRACER_AUTH_KEY - Authorization header for trace exporter requests. Supported for zipkin, jaeger, otlp. --- - TRACER_HEADERS - Custom authentication headers for trace exporter requests in comma-separated key=value format (e.g., "X-Api-Key=secret,Authorization=Bearer token"). Supported for zipkin, jaeger, otlp. Takes priority over TRACER_AUTH_KEY.
--- - CMD_LOGS_FILE - File to save the logs in case of a CMD application --- - SHUTDOWN_GRACE_PERIOD - Timeout duration for server shutdown process - 30s --- - GOFR_TELEMETRY - Enable telemetry for GoFr framework usage - true --- - LOG_DISABLE_PROBES - Disable log probes for health checks - false --- - GRPC_ENABLE_REFLECTION - Enable gRPC server reflection - false {% /table %} ## HTTP {% table %} - Name - Description --- - REQUEST_TIMEOUT - Set the request timeouts (in seconds) for HTTP server. --- - CERT_FILE - Set the path to your PEM certificate file for the HTTPS server to establish a secure connection. --- - KEY_FILE - Set the path to your PEM key file for the HTTPS server to establish a secure connection. {% /table %} ### CORS {% table %} - Name - Description - Default Value --- - ACCESS_CONTROL_ALLOW_ORIGIN - Allowed origin(s) for cross-origin requests. Supports comma-separated values for multiple origins (e.g., `https://app.example.com,https://admin.example.com`). - `*` --- - ACCESS_CONTROL_ALLOW_HEADERS - Allowed request headers for cross-origin requests. - Authorization, Content-Type, x-requested-with, origin, true-client-ip, X-Correlation-ID --- - ACCESS_CONTROL_ALLOW_METHODS - Allowed HTTP methods. Automatically set from registered routes if not provided. --- - ACCESS_CONTROL_ALLOW_CREDENTIALS - Allow credentials (cookies, HTTP authentication) in cross-origin requests. --- - ACCESS_CONTROL_EXPOSE_HEADERS - Additional headers exposed to the client in cross-origin responses. --- - ACCESS_CONTROL_MAX_AGE - Maximum time (in seconds) browsers can cache preflight responses. {% /table %} ## Datasource ### SQL {% table %} - Name - Description - Default Value --- - DB_DIALECT - Database dialect. Supported values: mysql, postgres, supabase --- - DB_HOST - Hostname of the database server. --- - DB_PORT - Port of the database server. - 3306 --- - DB_USER - Username for the database. --- - DB_PASSWORD - Password for the database. 
--- - DB_NAME - Name of the database to use. --- - DB_MAX_IDLE_CONNECTION - Maximum number of idle connections. - 2 --- - DB_MAX_OPEN_CONNECTION - Maximum number of open connections to the database. - 0 (unlimited) --- - DB_SSL_MODE - TLS/SSL mode for database connections. Supported modes: **disable** (no TLS), **preferred** (attempts TLS, falls back to plain), **require** (enforces TLS, skips validation), **skip-verify** (enforces TLS, no certificate validation), **verify-ca** (enforces TLS, validates certificate against CA), **verify-full** (enforces TLS with full validation including hostname). Currently supported for MySQL/MariaDB and PostgreSQL. - disable --- - DB_TLS_CA_CERT - Path to CA certificate file for TLS connections. Required for **verify-ca** and **verify-full** SSL modes. - None --- - DB_TLS_CLIENT_CERT - Path to client certificate file for mutual TLS authentication. - None --- - DB_TLS_CLIENT_KEY - Path to client private key file for mutual TLS authentication. - None --- - DB_REPLICA_HOSTS - Comma-separated list of replica database hosts. Used for read replicas. - None --- - DB_REPLICA_PORTS - Comma-separated list of replica database ports. Used for read replicas. - None --- - DB_REPLICA_USERS - Comma-separated list of replica database users. Used for read replicas. - None --- - DB_REPLICA_PASSWORDS - Comma-separated list of replica database passwords. Used for read replicas.
- None --- - DB_REPLICA_MAX_IDLE_CONNECTIONS - Maximum idle connections allowed for a replica - 50 --- - DB_REPLICA_MIN_IDLE_CONNECTIONS - Minimum idle connections for a replica - 10 --- - DB_REPLICA_DEFAULT_IDLE_CONNECTIONS - Idle connections used if no primary setting is provided - 10 --- - DB_REPLICA_MAX_OPEN_CONNECTIONS - Maximum open connections allowed for a replica - 200 --- - DB_REPLICA_MIN_OPEN_CONNECTIONS - Minimum open connections for a replica - 50 --- - DB_REPLICA_DEFAULT_OPEN_CONNECTIONS - Open connections used if no primary setting is provided - 100 --- - DB_CHARSET - The character set for database connection - utf8 --- - SUPABASE_CONNECTION_TYPE - Connection type to Supabase. Supported values: direct, session, transaction - direct --- - SUPABASE_PROJECT_REF - Supabase project reference ID --- - SUPABASE_REGION - Supabase region for pooled connections --- - DB_URL - Full PostgreSQL connection string for Supabase (alternative to separate config parameters) {% /table %} ### Redis {% table %} - Name - Description - Default Value --- - REDIS_HOST - Hostname of the Redis server. - localhost --- - REDIS_PORT - Port of the Redis server. - 6379 --- - REDIS_USER - Username for the Redis server (optional). - "" --- - REDIS_PASSWORD - Password for the Redis server (optional). - "" --- - REDIS_DB - Database number to use for the Redis server. - 0 --- - REDIS_TLS_ENABLED - Enable TLS for Redis connections. - false --- - REDIS_TLS_CA_CERT - Path to the TLS CA certificate file for Redis (or PEM-encoded string). - "" --- - REDIS_TLS_CERT - Path to the TLS certificate file for Redis (or PEM-encoded string). - "" --- - REDIS_TLS_KEY - Path to the TLS key file for Redis (or PEM-encoded string). - "" {% /table %} **Redis PubSub Configuration:** {% table %} - Name - Description - Default Value --- - REDIS_PUBSUB_DB - Redis database number to use only for Redis Pub/Sub (when `PUBSUB_BACKEND=REDIS`). 
Use a different DB than `REDIS_DB` when running GoFr migrations with Redis Streams mode to avoid `gofr_migrations` key-type collisions. - Default: `15` (highest default Redis database, 0-15) --- - REDIS_PUBSUB_MODE - Operation mode: `pubsub` or `streams`. - streams --- - REDIS_STREAMS_CONSUMER_GROUP - Consumer group name (required for streams mode). - "" --- - REDIS_STREAMS_CONSUMER_NAME - Unique consumer name (optional, auto-generated if empty). - "" --- - REDIS_STREAMS_BLOCK_TIMEOUT - Blocking duration for reading new messages using Redis `XREADGROUP`. Lower values (1s-2s) provide faster detection but increase CPU usage. Higher values (10s-30s) reduce CPU usage, ideal for batch processing. - 5s --- - REDIS_STREAMS_PEL_RATIO - Ratio of PEL (pending) messages to read vs new messages (0.0-1.0). Controls balance between retry and fresh messages. 0.7 = 70% PEL, 30% new. - 0.7 --- - REDIS_STREAMS_MAXLEN - Maximum length of the stream (approximate). Prevents streams from growing indefinitely. Set to `0` for unlimited. - 0 (unlimited) {% /table %} > **Note**: When using GoFr migrations with Streams mode, keep `REDIS_DB` and `REDIS_PUBSUB_DB` separate (defaults: 0 and 15). For `REDIS_STREAMS_BLOCK_TIMEOUT`: use 1s-2s for real-time or 10s-30s for batch processing. ### Pub/Sub {% table %} - Name - Description - Default Value --- - PUBSUB_BACKEND - Pub/Sub message broker backend wired automatically by `gofr.New()`. Accepted values: `kafka`, `google`, `mqtt`, `redis` (case-insensitive). Other backends (NATS JetStream, AWS SQS, Azure Event Hub) are wired explicitly via `app.AddPubSub(...)`. - {% /table %} **Kafka** {% table %} - Name - Description - Default Value --- - PUBSUB_BROKER - Comma-separated list of broker addresses - localhost:9092 --- - PARTITION_SIZE - Size of each message partition (in bytes) - 0 --- - PUBSUB_OFFSET - Offset to start consuming messages from. -1 for earliest, 0 for latest. 
- -1 --- - KAFKA_BATCH_SIZE - Number of messages to batch before sending to Kafka - 1 --- - KAFKA_BATCH_BYTES - Number of bytes to batch before sending to Kafka - 1048576 --- - KAFKA_BATCH_TIMEOUT - Time to wait before sending a batch to Kafka - 100ms --- - CONSUMER_ID - Unique identifier for this consumer - gofr-consumer --- - KAFKA_SECURITY_PROTOCOL - Security protocol used to communicate with Kafka (e.g., PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL) - PLAINTEXT --- - KAFKA_SASL_MECHANISM - SASL mechanism for authentication (e.g. PLAIN, SCRAM-SHA-256, SCRAM-SHA-512) - None --- - KAFKA_SASL_USERNAME - Username for SASL authentication - None --- - KAFKA_SASL_PASSWORD - Password for SASL authentication - None --- - KAFKA_TLS_CERT_FILE - Path to the TLS certificate file - None --- - KAFKA_TLS_KEY_FILE - Path to the TLS key file - None --- - KAFKA_TLS_CA_CERT_FILE - Path to the TLS CA certificate file - None --- - KAFKA_TLS_INSECURE_SKIP_VERIFY - Skip TLS certificate verification - false {% /table %} **Google** {% table %} - Name - Description --- - GOOGLE_PROJECT_ID - ID of the Google Cloud project. Required for Google Pub/Sub. --- - GOOGLE_SUBSCRIPTION_NAME - Name of the Google Pub/Sub subscription. Required for Google Pub/Sub. {% /table %} **MQTT** {% table %} - Name - Description - Default Value --- - MQTT_PORT - Port of the MQTT broker - 1883 --- - MQTT_MESSAGE_ORDER - Enable guaranteed message order - false --- - MQTT_PROTOCOL - Communication protocol. Supported values: tcp, ssl. - tcp --- - MQTT_HOST - Hostname of the MQTT broker - localhost --- - MQTT_USER - Username for the MQTT broker --- - MQTT_PASSWORD - Password for the MQTT broker --- - MQTT_CLIENT_ID_SUFFIX - Suffix appended to the client ID --- - MQTT_QOS - Quality of Service Level --- - MQTT_KEEP_ALIVE - Sends regular messages to check the link is active.
May not work as expected if the handling func blocks execution --- - MQTT_RETRIEVE_RETAINED - Retrieve retained messages on subscription {% /table %} **NATS JetStream** {% table %} - Name - Description - Default Value --- - NATS_SERVER - URL of the NATS server. The NATS driver is wired explicitly via `app.AddPubSub(...)`; this row is a convention only — the actual env-var name is whatever you read from `app.Config.Get(...)` and pass into `nats.Config.Server`. - nats://localhost:4222 --- - NATS_CREDS_FILE - File containing the NATS credentials - creds.json {% /table %} ## Related production guides - **12-Factor Configuration**: [Env-driven config, secrets, and environment parity](/docs/guides/twelve-factor-config) — apply 12-factor methodology to the configs documented above. --- ## https://gofr.dev/docs/references/context # GoFr Context GoFr context is an object injected by the GoFr handler. It contains all the request-specific data; a new context is created for each request-response cycle. The request can be an HTTP request, a gRPC call, or a message from Pub/Sub. GoFr Context also embeds the **_container_**, which maintains all the dependencies like databases, logger, HTTP service clients, metrics manager, etc. This reduces the complexity of the application as users don't have to maintain and keep track of all the dependencies themselves. GoFr context is an extension of the Go context, wrapping the request and response while giving users access to dependencies. # Usage ## Reading HTTP requests `ctx.Request` can be used to access the underlying request, which provides the following methods to access different parts of the request.
- `Context()` - to access the context associated with the incoming request ```go ctx.Request.Context() ``` - `Param(string)` - to access the query parameters present in the request; it returns the value of the key provided ```go // Example: Request is /configs?key1=value1&key2=value2 value := ctx.Request.Param("key1") // value = "value1" ``` - `PathParam(string)` - to retrieve the path parameters ```go // Consider the path to be /employee/{id} id := ctx.Request.PathParam("id") ``` - `Bind(any)` - to access a decoded form of the request body; the body is mapped to the provided interface ```go // incoming request body is // { // "name" : "trident", // "category" : "snacks" // } type product struct{ Name string `json:"name"` Category string `json:"category"` } var p product ctx.Bind(&p) // the Bind() method will map the incoming request to variable p ``` - **Binding multipart/form-data or url-encoded form data** - To bind multipart-form data or url-encoded form data, we can use the Bind method similarly. The struct fields should be tagged appropriately to map the form fields to the struct fields. The supported content types are `multipart/form-data` and `application/x-www-form-urlencoded` ```go type Data struct { Name string `form:"name"` Compressed file.Zip `file:"upload"` FileHeader *multipart.FileHeader `file:"file_upload"` } ``` - The `form` tag is used to bind non-file fields. - The `file` tag is used to bind file fields. If the tag is not present, the field name is used as the key. - `HostName()` - to access the host name for the incoming request ```go // for example if request is made from xyz.com host := ctx.Request.HostName() // the host would be http://xyz.com // Note: the protocol if not provided in the headers will be set to http by default ``` - `Params(string)` - to access all query parameters for a given key, returning a slice of strings.
```go // Example: Request is /search?category=books,electronics&category=tech values := ctx.Request.Params("category") // values = []string{"books", "electronics", "tech"} ``` ## Accessing Authentication Information GoFr provides a helper method to access authentication details from the context. These values are populated when the respective authentication middleware is enabled (see [HTTP Auth Middleware](https://github.com/gofr-dev/gofr/blob/0845d19181d2cc55e12c557fc9ad51adb4ab44fd/examples/using-http-auth-middleware/ReadMe.md) section). ```go info := ctx.GetAuthInfo() ``` ### Methods * **`GetClaims()`** – Returns the JWT claims containing standard fields such as: * `Issuer` – identifies who issued the token. * `Subject` – identifies the principal that is the subject of the token. * `Audience` – identifies the intended recipients of the token. * `NotBefore` – time before which the token is not valid. * `IssuedAt` – time at which the token was issued. * `ExpirationTime` – time after which the token expires. **Requires:** OAuth middleware (`EnableOAuth`) * **`GetUsername()`** – Returns the authenticated username when using Basic Authentication. **Requires:** Basic Auth middleware (`EnableBasicAuthWithValidator`) * **`GetAPIKey()`** – Returns the API key used for authentication. **Requires:** API Key middleware (`EnableAPIKeyAuthWithValidator`) > Note: These values will be available only if the respective authentication middleware is enabled in the application. ## Accessing dependencies GoFr context embeds the container object which provides access to all the injected dependencies by the users. Users can access the fields and methods provided by the **_container_**. --- ## https://gofr.dev/docs/references/gofrcli/init # gofr init The init command initializes a new GoFr project. It sets up the foundational structure for the project and generates a basic "Hello World!" program as a starting point. 
This allows developers to quickly dive into building their application with a ready-made structure. ## Command Usage ```bash gofr init ``` --- ## See also - [GoFr CLI overview](/docs/references/gofrcli) - [`gofr migrate`](/docs/references/gofrcli/migrate) - [`gofr wrap grpc`](/docs/references/gofrcli/wrap-grpc) - [`gofr store`](/docs/references/gofrcli/store) --- ## https://gofr.dev/docs/references/gofrcli/migrate # gofr migrate create The migrate create command generates a migration template file with a pre-defined structure in your migrations directory. This boilerplate code helps you maintain consistent patterns when writing database schema modifications across your project. ## Command Usage ```bash gofr migrate create -name=<migration_name> ``` ## Example Usage ```bash gofr migrate create -name=create_employee_table ``` This command generates a migrations directory containing the files below: 1. A new migration file with timestamp prefix (e.g., `20250127152047_create_employee_table.go`) containing: ```go package migrations import ( "gofr.dev/pkg/gofr/migration" ) func create_employee_table() migration.Migrate { return migration.Migrate{ UP: func(d migration.Datasource) error { // write your migrations here return nil }, } } ``` 2. An auto-generated all.go file that maintains a registry of all migrations: ```go // This is auto-generated file using 'gofr migrate' tool. DO NOT EDIT. package migrations import ( "gofr.dev/pkg/gofr/migration" ) func All() map[int64]migration.Migrate { return map[int64]migration.Migrate { 20250127152047: create_employee_table(), } } ``` > **💡 Best Practice:** Learn about [organizing migrations by feature](/docs/advanced-guide/handling-data-migrations#organizing-migrations-by-feature) to avoid creating one migration per table or operation. 
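To show how the generated `UP` hook is typically filled in, here is a sketch of a completed migration. The `employee` schema below is illustrative only (not produced by the tool); the `migration.Migrate` / `migration.Datasource` types and the `d.SQL` handle are GoFr's documented migration API:

```go
package migrations

import (
	"gofr.dev/pkg/gofr/migration"
)

// create_employee_table is a sketch of a filled-in migration.
// The employee table and its columns are illustrative only.
func create_employee_table() migration.Migrate {
	return migration.Migrate{
		UP: func(d migration.Datasource) error {
			// d.SQL exposes the SQL datasource for running DDL/DML statements.
			_, err := d.SQL.Exec(`CREATE TABLE IF NOT EXISTS employee (
				id   INT AUTO_INCREMENT PRIMARY KEY,
				name VARCHAR(100) NOT NULL
			)`)

			return err
		},
	}
}
```

Returning a non-nil error aborts the migration run; GoFr records each applied version, so an `UP` function executes only once per environment.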
For detailed instructions on handling database migrations, see the [handling-data-migrations documentation](/docs/advanced-guide/handling-data-migrations). For more examples, see the [using-migrations example](https://github.com/gofr-dev/gofr/tree/main/examples/using-migrations). --- ## See also - [GoFr CLI overview](/docs/references/gofrcli) - [`gofr init`](/docs/references/gofrcli/init) - [`gofr wrap grpc`](/docs/references/gofrcli/wrap-grpc) - [`gofr store`](/docs/references/gofrcli/store) --- ## https://gofr.dev/docs/references/gofrcli # GoFr Command Line Interface Managing repetitive tasks and maintaining consistency across large-scale applications is challenging! **GoFr CLI provides the following:** * All-in-one command-line tool designed specifically for GoFr applications * Simplifies **database migrations** management * **Store Layer Generator** for type-safe data access code from YAML configurations * Abstracts **tracing**, **metrics** and structured **logging** for GoFr's gRPC server/client * Enforces standard **GoFr conventions** in new projects ## Prerequisites - Go 1.25 or above. To check the Go version, use the following command: ```bash go version ``` ## **Installation** To get started with GoFr CLI, use the command below: ```bash go install gofr.dev/cli/gofr@latest ``` To check the installation: ```bash gofr version ``` --- ## Usage The CLI can be run directly from the terminal after installation. Here’s the general syntax: ```bash gofr <command> [flags]=[arguments] ``` --- ## Commands The CLI groups its functionality into four commands. See each subpage for full reference: - [`gofr init`](/docs/references/gofrcli/init) — initialize a new GoFr project with the standard layout. - [`gofr migrate`](/docs/references/gofrcli/migrate) — create database migration templates with timestamped filenames and an auto-generated registry. - [`gofr wrap grpc`](/docs/references/gofrcli/wrap-grpc) — generate gRPC server/client wrappers with built-in tracing, metrics, and logging. 
- [`gofr store`](/docs/references/gofrcli/store) — generate a type-safe data-access layer from YAML schema definitions. --- ## https://gofr.dev/docs/references/gofrcli/store # gofr store > **Available since:** `gofr-cli` **v0.8.1** The `gofr store` command is a code generator that creates type-safe data access layers from YAML configuration files. It eliminates boilerplate code while maintaining GoFr's best practices for observability and context management. ## **Features** * **YAML-Driven Configuration**: Define your data models and queries in a simple, declarative format. * **Type-Safe Code Generation**: Generates Go interfaces and implementation boilerplates. * **GoFr Context Integration**: Generated methods work with `*gofr.Context` for built-in observability. * **Multiple Stores**: Define all stores in a single YAML file — each gets its own directory. * **Store Registry**: Centralized factory management of all generated stores via `stores/all.go`. ## **Commands** ### **Initialize Store Configuration** Create a new store directory and a `store.yaml` configuration template. **The `-name` flag is required.** ```bash gofr store init -name=<store_name> ``` **Example:** ```bash gofr store init -name=user ``` This creates the following structure: - `stores/store.yaml` — Configuration file template (shared across all stores). - `stores/all.go` — Store registry factory (auto-generated, DO NOT EDIT). - `stores/user/interface.go` — Initial interface stub (DO NOT EDIT — regenerated by `generate`). - `stores/user/user.go` — Initial implementation stub (editable — add your SQL logic here). ### **Generate Store Code** Generate or update Go code from your store configuration file. ```bash gofr store generate ``` > **💡 Note:** By default, this command looks for the configuration at **`stores/store.yaml`**. 
To use a different path, use the `-config` flag: > ```bash > gofr store generate -config=path/to/store.yaml > ``` --- ## **Quick Start Example** **Step 1: Initialize Configuration** ```bash gofr store init -name=user ``` **Step 2: Define Your Store in `stores/store.yaml`** ```yaml version: "1.0" stores: - name: "user" package: "user" output_dir: "stores/user" interface: "UserStore" implementation: "userStore" queries: - name: "GetUserByID" sql: "SELECT id, name, email FROM users WHERE id = ?" type: "select" model: "User" returns: "single" params: - name: "id" type: "int64" description: "Retrieves a user by their ID" - name: "GetAllUsers" sql: "SELECT id, name, email FROM users" type: "select" model: "User" returns: "multiple" description: "Retrieves all users" models: - name: "User" fields: - name: "ID" type: "int64" tag: 'db:"id" json:"id"' - name: "Name" type: "string" tag: 'db:"name" json:"name"' - name: "Email" type: "string" tag: 'db:"email" json:"email"' ``` **Step 3: Generate Store Code** ```bash gofr store generate ``` This generates: ```text stores/ ├── store.yaml # Central Configuration ├── all.go # Store registry factory (auto-generated) └── user/ ├── interface.go # UserStore interface definition ├── userStore.go # userStore implementation boilerplate └── user.go # User model struct ``` **Step 4: Use in Your Application** ```go package main import ( "strconv" "gofr.dev/pkg/gofr" "your-project/stores/user" ) func main() { app := gofr.New() userStore := user.NewUserStore() app.GET("/users/{id}", func(ctx *gofr.Context) (interface{}, error) { id, _ := strconv.ParseInt(ctx.PathParam("id"), 10, 64) return userStore.GetUserByID(ctx, id) }) app.GET("/users", func(ctx *gofr.Context) (interface{}, error) { return userStore.GetAllUsers(ctx) }) app.Run() } ``` --- ## **Multiple Stores in One File** You can define all stores in a single YAML file. Each store gets its own output directory and all are registered into the same `stores/all.go` registry. 
```yaml version: "1.0" stores: - name: "user" package: "user" output_dir: "stores/user" interface: "UserStore" implementation: "userStore" queries: [...] - name: "product" package: "product" output_dir: "stores/product" interface: "ProductStore" implementation: "productStore" queries: [...] models: - name: "User" fields: [...] - name: "Product" fields: [...] ``` **Generated structure:** ```text stores/ ├── all.go ├── user/ │ ├── interface.go │ ├── userStore.go │ └── user.go └── product/ ├── interface.go ├── productStore.go └── product.go ``` **Using the registry with multiple stores:** ```go import ( "your-project/stores" "your-project/stores/user" "your-project/stores/product" ) // stores.GetStore returns a factory-created instance userStore := stores.GetStore("user").(user.UserStore) productStore := stores.GetStore("product").(product.ProductStore) ``` > **💡 Note:** `stores.All()` returns a `map[string]func() any` — a map of **factory functions**, not active instances. `stores.GetStore(name)` calls the factory for you and returns the instance. --- ## **Configuration Reference** ### **Store Configuration** | Field | Description | Required | |-------|-------------|----------| | `name` | Store identifier used in the registry key. | **Yes** | | `package` | Go package name for generated code. | **Yes** | | `output_dir` | Directory path where files will be generated. | Optional (defaults to `stores/`) | | `interface` | Interface name — **recommended: `<Name>Store`** (e.g., `UserStore`). | Optional (defaults to `Store`) | | `implementation` | Private struct name for the implementation (e.g., `userStore`). | Optional (defaults to `Store`) | | `queries` | List of database queries. | Optional | > **⚠️ Naming Convention:** The registry (`stores/all.go`) uses a hardcoded `<Name>Store` pattern when generating constructor calls (e.g., `NewUserStore()`). Always name your interface `<Name>Store` to avoid compilation errors. ### **Query Types** * **`select`** — SELECT queries. 
* **`insert`** — INSERT queries. * **`update`** — UPDATE queries. * **`delete`** — DELETE queries. ### **Return Types** * **`single`** — Returns `(Model, error)`. * **`multiple`** — Returns `([]Model, error)`. * **`count`** — Returns `(int64, error)`. * **`custom`** — Returns `(any, error)`. ### **Query Parameters** ```yaml params: - name: "id" type: "int64" - name: "email" type: "string" ``` Supported parameter types include all Go primitive types, `time.Time`, and pointer types (e.g., `*int64`). --- ## **Model Generation** ### **Generate New Models** ```yaml models: - name: "User" fields: - name: "ID" type: "int64" tag: 'db:"id" json:"id"' - name: "Name" type: "string" tag: 'db:"name" json:"name"' - name: "CreatedAt" type: "time.Time" tag: 'db:"created_at" json:"created_at"' ``` This generates: ```go type User struct { ID int64 `db:"id" json:"id"` Name string `db:"name" json:"name"` CreatedAt time.Time `db:"created_at" json:"created_at"` } func (User) TableName() string { return "user" } ``` ### **Reference Existing Models** If you already have models defined elsewhere: ```yaml models: - name: "User" path: "../models/user.go" package: "your-project/models" ``` --- ## **Generated Code Structure** ### **Interface (`interface.go`)** ```go // Code generated by gofr.dev/cli/gofr. DO NOT EDIT. package user import "gofr.dev/pkg/gofr" type UserStore interface { GetUserByID(ctx *gofr.Context, id int64) (User, error) GetAllUsers(ctx *gofr.Context) ([]User, error) } ``` ### **Implementation (`userStore.go`)** ```go // Code generated by gofr.dev/cli/gofr. DO NOT EDIT. package user type userStore struct{} func NewUserStore() UserStore { return &userStore{} } func (s *userStore) GetUserByID(ctx *gofr.Context, id int64) (User, error) { // TODO: Implement using ctx.SQL() var result User // err := ctx.SQL().QueryRowContext(ctx, sql, id).Scan(&result.ID, ...) 
return result, nil } func (s *userStore) GetAllUsers(ctx *gofr.Context) ([]User, error) { // TODO: Implement using ctx.SQL() return []User{}, nil } ``` --- ## **Best Practices** 1. **Implement the TODOs**: The generator creates method **signatures and boilerplate only**. You must fill in the `// TODO: Implement` sections with actual SQL execution using `ctx.SQL()` methods. 2. **Use `<Name>Store` Interface Names**: The registry assumes this convention. E.g., `interface: "UserStore"` results in the constructor `NewUserStore()` and type assertion `.(user.UserStore)`. 3. **One YAML, Many Stores**: Define all your stores in a single `store.yaml` to keep your data access layer centrally configured. 4. **Know Which Files Are Auto-Generated**: Only `interface.go` and `all.go` are marked `DO NOT EDIT` and are overwritten on every `gofr store generate`. The implementation stub (e.g., `user.go`) created by `gofr store init` is editable — this is where you add your SQL logic. The `userStore.go` generated by `gofr store generate` is also editable boilerplate. 5. **Version Control**: Always commit your `store.yaml`. Re-run `gofr store generate` after any configuration change to sync the generated interfaces. --- ## **Complete Example** For a complete working example of the store generator, see the [store example](https://github.com/gofr-dev/gofr-cli/tree/main/store/example.yaml) in the gofr-cli repository. For detailed configuration options and advanced usage, refer to the [Store Generator README](https://github.com/gofr-dev/gofr-cli/blob/main/store/README.md). --- ## See also - [GoFr CLI overview](/docs/references/gofrcli) - [`gofr init`](/docs/references/gofrcli/init) - [`gofr migrate`](/docs/references/gofrcli/migrate) - [`gofr wrap grpc`](/docs/references/gofrcli/wrap-grpc) --- ## https://gofr.dev/docs/references/gofrcli/wrap-grpc # gofr wrap grpc * The gofr wrap grpc command streamlines gRPC integration in a GoFr project by generating GoFr's context-aware structures. 
* It simplifies setting up gRPC handlers, accessing datasources, and adding tracing as well as custom metrics, all with minimal steps. Based on the proto file, it creates the handler/client with GoFr's context. For detailed instructions on using gRPC with GoFr, see the [gRPC documentation](/docs/advanced-guide/grpc) ## Command Usage **gRPC Server** ```bash gofr wrap grpc server --proto=<path-to-proto-file> ``` ## Generated Files **Server** - ```{serviceName}_gofr.go (auto-generated; do not modify)``` - ```{serviceName}_server.go (example structure below)``` ## Example Usage **gRPC Server** The command generates a server implementation template similar to this: ```go package server import ( "gofr.dev/pkg/gofr" ) // Register the gRPC service in your app using the following code in your main.go: // // service.Register{ServiceName}ServerWithGofr(app, &server.{ServiceName}Server{}) // // {ServiceName}Server defines the gRPC server implementation. // Customize the struct with required dependencies and fields as needed. 
type {ServiceName}Server struct { } // Example method (actual methods will depend on your proto file) func (s *{ServiceName}Server) MethodName(ctx *gofr.Context) (any, error) { // Replace with actual logic if needed return &ServiceResponse{ }, nil } ``` For detailed instructions on setting up a gRPC server with GoFr, see the [gRPC Server Documentation](https://gofr.dev/docs/advanced-guide/grpc#generating-g-rpc-server-handler-template-using) **gRPC Client** ```bash gofr wrap grpc client --proto=<path-to-proto-file> ``` **Client** - ```{serviceName}_client.go (example structure below)``` ## Example Usage Assuming the service is named `hello`, after generating the hello_client.go file, you can seamlessly register and access the gRPC service using the following steps: ```go type GreetHandler struct { helloGRPCClient client.HelloGoFrClient } func NewGreetHandler(helloClient client.HelloGoFrClient) *GreetHandler { return &GreetHandler{ helloGRPCClient: helloClient, } } func (g GreetHandler) Hello(ctx *gofr.Context) (any, error) { userName := ctx.Param("name") helloResponse, err := g.helloGRPCClient.SayHello(ctx, &client.HelloRequest{Name: userName}) if err != nil { return nil, err } return helloResponse, nil } func main() { app := gofr.New() // Create a gRPC client for the Hello service helloGRPCClient, err := client.NewHelloGoFrClient(app.Config.Get("GRPC_SERVER_HOST"), app.Metrics()) if err != nil { app.Logger().Errorf("Failed to create Hello gRPC client: %v", err) return } greetHandler := NewGreetHandler(helloGRPCClient) // Register HTTP endpoint for Hello service app.GET("/hello", greetHandler.Hello) // Run the application app.Run() } ``` For detailed instructions on setting up a gRPC client with GoFr, see the [gRPC Client Documentation](https://gofr.dev/docs/advanced-guide/grpc#generating-tracing-enabled-g-rpc-client-using) For more examples, refer to the [gRPC Examples](https://github.com/gofr-dev/gofr/tree/main/examples/grpc) --- ## See also - [GoFr CLI overview](/docs/references/gofrcli) - [`gofr 
init`](/docs/references/gofrcli/init) - [`gofr migrate`](/docs/references/gofrcli/migrate) - [`gofr store`](/docs/references/gofrcli/store) --- ## https://gofr.dev/docs/references/testing # Testing REST APIs with GoFr Testing REST APIs ensures that your endpoints function correctly under various conditions. This guide demonstrates how to write tests for GoFr-based REST APIs. ## Mocking Databases in GoFr Mocking databases allows for isolated testing by simulating various scenarios. GoFr's built-in mock container supports not only SQL databases but also other data stores, including Redis, Cassandra, Key-Value stores, MongoDB, and ClickHouse. ## Example of Unit Testing a REST API Using GoFr Below is an example of how to test the `Add` method of a handler that interacts with a SQL database. Here’s an `Add` function for adding a book to the database using GoFr: ```go // main.go package main import ( "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/http" ) type Book struct { Id int `json:"id"` ISBN int `json:"isbn"` Title string `json:"title"` } func Add(ctx *gofr.Context) (any, error) { var book Book if err := ctx.Bind(&book); err != nil { ctx.Logger.Errorf("error in binding: %v", err) return nil, http.ErrorInvalidParam{Params: []string{"body"}} } // we assume the `id` column in the database is set to auto-increment. 
res, err := ctx.SQL.ExecContext(ctx, `INSERT INTO books (title, isbn) VALUES (?, ?)`, book.Title, book.ISBN) if err != nil { return nil, err } id, err := res.LastInsertId() if err != nil { return nil, err } return id, nil } func main() { // initialize gofr object app := gofr.New() app.POST("/book", Add) // Run the application app.Run() } ``` Here’s how to write tests using GoFr: ```go // main_test.go package main import ( "bytes" "context" "database/sql" "errors" "net/http" "net/http/httptest" "testing" "github.com/DATA-DOG/go-sqlmock" "github.com/stretchr/testify/assert" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/container" gofrHttp "gofr.dev/pkg/gofr/http" ) func TestAdd(t *testing.T) { type gofrResponse struct { result any err error } // NewMockContainer provides mock implementations for various databases including: // Redis, SQL, ClickHouse, Cassandra, MongoDB, and KVStore. // These mocks can be used to define database expectations in unit tests, // similar to the SQL example demonstrated here. mockContainer, mock := container.NewMockContainer(t) ctx := &gofr.Context{ Context: context.Background(), Request: nil, Container: mockContainer, } tests := []struct { name string requestBody string mockExpect func() expectedResponse any }{ { name: "Error while Binding", requestBody: `title":"Book Title","isbn":12345}`, mockExpect: func() { }, expectedResponse: gofrResponse{ nil, gofrHttp.ErrorInvalidParam{Params: []string{"body"}}}, }, { name: "Successful Insertion", requestBody: `{"title":"Book Title","isbn":12345}`, mockExpect: func() { mock.SQL. ExpectExec(`INSERT INTO books (title, isbn) VALUES (?, ?)`). WithArgs("Book Title", 12345). WillReturnResult(sqlmock.NewResult(12, 1)) }, expectedResponse: gofrResponse{ int64(12), nil, }, }, { name: "Error on Insertion", requestBody: `{"title":"Book Title","isbn":12345}`, mockExpect: func() { mock.SQL. ExpectExec(`INSERT INTO books (title, isbn) VALUES (?, ?)`). WithArgs("Book Title", 12345). 
WillReturnError(sql.ErrConnDone) }, expectedResponse: gofrResponse{ nil, sql.ErrConnDone}, }, { name: "Error while fetching LastInsertId", requestBody: `{"title":"Book Title","isbn":12345}`, mockExpect: func() { mock.SQL. ExpectExec(`INSERT INTO books (title, isbn) VALUES (?, ?)`). WithArgs("Book Title", 12345). WillReturnError(errors.New("mocked result error")) }, expectedResponse: gofrResponse{ nil, errors.New("mocked result error")}, }, } for i, tt := range tests { t.Run(tt.name, func(t *testing.T) { tt.mockExpect() var req *http.Request req = httptest.NewRequest( http.MethodPost, "/book", bytes.NewBuffer([]byte(tt.requestBody)), ) req.Header.Set("Content-Type", "application/json") request := gofrHttp.NewRequest(req) ctx.Request = request val, err := Add(ctx) response := gofrResponse{val, err} assert.Equal(t, tt.expectedResponse, response, "TEST[%d], Failed.\n%s", i, tt.name) }) } } ``` ## Testing HTTP Handlers with Mock Services When you register multiple services with `WithMockHTTPService`, each service gets its own separate mock instance. This allows you to set different expectations for each service using the `mocks.HTTPServices` map. Use table-driven tests to cover multiple scenarios: ### Important Notes - **Context Matching**: Always use the exact context from your `gofr.Context` (`ctx.Context`) in expectations. gomock compares contexts by reference, not value, so using `t.Context()` or `context.Background()` will fail. - **Service Registration**: `WithMockHTTPService("serviceName")` registers the service with the specified name. Each service gets its own separate mock instance. - **Multiple Services**: Use `mocks.HTTPServices["serviceName"]` to access and set different expectations for each service. Each service has its own mock instance, so expectations are independent. - **Tests will fail** if the mocked HTTPService is not called as expected or if the context doesn't match. 
```go import ( "encoding/json" "errors" "fmt" "io" "net/http" "net/http/httptest" "strings" "testing" "github.com/gorilla/mux" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/container" gofrHttp "gofr.dev/pkg/gofr/http" ) // Handler that calls multiple HTTP services // This handler demonstrates calling two different services (paymentService and shippingService) // to fetch order details from different parts of the system. func OrderDetailsHandler(ctx *gofr.Context) (any, error) { orderID := ctx.PathParam("id") if orderID == "" { return nil, errors.New("order ID is required") } // First HTTP service call: Get payment details from paymentService paymentService := ctx.GetHTTPService("paymentService") paymentResp, err := paymentService.Get(ctx.Context, "/payments/"+orderID, nil) if err != nil { return nil, fmt.Errorf("failed to fetch payment details: %w", err) } defer paymentResp.Body.Close() var paymentData struct { Status string `json:"status"` Amount int `json:"amount"` } paymentBody, err := io.ReadAll(paymentResp.Body) if err != nil { return nil, fmt.Errorf("failed to read payment response: %w", err) } if err := json.Unmarshal(paymentBody, &paymentData); err != nil { return nil, fmt.Errorf("failed to parse payment response: %w", err) } // Second HTTP service call: Get shipping details from shippingService shippingService := ctx.GetHTTPService("shippingService") shippingResp, err := shippingService.Get(ctx.Context, "/shipping/"+orderID, nil) if err != nil { return nil, fmt.Errorf("failed to fetch shipping details: %w", err) } defer shippingResp.Body.Close() var shippingData struct { Status string `json:"status"` Tracking string `json:"tracking"` EstimatedDelivery string `json:"estimated_delivery"` } shippingBody, err := io.ReadAll(shippingResp.Body) if err != nil { return nil, fmt.Errorf("failed to read shipping response: %w", err) } if err := json.Unmarshal(shippingBody, &shippingData); err != nil { 
return nil, fmt.Errorf("failed to parse shipping response: %w", err) } // Combine results from both services return map[string]any{ "order_id": orderID, "payment_status": paymentData.Status, "payment_amount": paymentData.Amount, "shipping_status": shippingData.Status, "tracking_number": shippingData.Tracking, "estimated_delivery": shippingData.EstimatedDelivery, }, nil } func TestOrderDetailsHandler(t *testing.T) { // Helper function to create test context with path parameters createTestContext := func(path string, container *container.Container) *gofr.Context { req := httptest.NewRequest(http.MethodGet, path, nil) // Set path parameters using mux.SetURLVars (required for ctx.PathParam to work) if strings.Contains(path, "/orders/") { parts := strings.Split(strings.Trim(path, "/"), "/") if len(parts) >= 2 && parts[1] != "" { req = mux.SetURLVars(req, map[string]string{"id": parts[1]}) } } return &gofr.Context{ Context: req.Context(), Request: gofrHttp.NewRequest(req), Container: container, } } const testOrderID = "12345" // Reusable order ID for tests tests := []struct { name string setupMocks func(*container.Mocks, *gofr.Context) requestPath string wantErr bool wantErrMsg string validateResult func(*testing.T, any) }{ { name: "successful order details retrieval", setupMocks: func(mocks *container.Mocks, ctx *gofr.Context) { // Set up expectation for paymentService - this is the first HTTP call in the handler paymentResp := &http.Response{ StatusCode: http.StatusOK, Body: io.NopCloser(strings.NewReader(`{"status":"completed","amount":1500}`)), } mocks.HTTPServices["paymentService"].EXPECT().Get( ctx.Context, "/payments/"+testOrderID, nil, ).Return(paymentResp, nil) // Set up expectation for shippingService - this is the second HTTP call in the handler // Note: Each service has its own independent mock instance shippingResp := &http.Response{ StatusCode: http.StatusOK, Body: 
io.NopCloser(strings.NewReader(`{"status":"in_transit","tracking":"TRACK123","estimated_delivery":"2024-12-25"}`)), } mocks.HTTPServices["shippingService"].EXPECT().Get( ctx.Context, "/shipping/"+testOrderID, nil, ).Return(shippingResp, nil) }, requestPath: "/orders/" + testOrderID, wantErr: false, validateResult: func(t *testing.T, result any) { resultMap := result.(map[string]any) assert.Equal(t, testOrderID, resultMap["order_id"]) assert.Equal(t, "completed", resultMap["payment_status"]) assert.Equal(t, 1500, resultMap["payment_amount"]) assert.Equal(t, "in_transit", resultMap["shipping_status"]) assert.Equal(t, "TRACK123", resultMap["tracking_number"]) assert.Equal(t, "2024-12-25", resultMap["estimated_delivery"]) }, }, { name: "payment service error", setupMocks: func(mocks *container.Mocks, ctx *gofr.Context) { // Payment service returns an error - handler should fail before calling shipping service mocks.HTTPServices["paymentService"].EXPECT().Get( ctx.Context, "/payments/"+testOrderID, nil, ).Return(nil, errors.New("payment service unavailable")) // Shipping service should NOT be called when payment service fails // No expectation set for shippingService - test will fail if it's called }, requestPath: "/orders/" + testOrderID, wantErr: true, wantErrMsg: "failed to fetch payment details", }, { name: "shipping service error", setupMocks: func(mocks *container.Mocks, ctx *gofr.Context) { // Payment service succeeds paymentResp := &http.Response{ StatusCode: http.StatusOK, Body: io.NopCloser(strings.NewReader(`{"status":"completed","amount":1500}`)), } mocks.HTTPServices["paymentService"].EXPECT().Get( ctx.Context, "/payments/"+testOrderID, nil, ).Return(paymentResp, nil) // Shipping service returns an error - this is the second HTTP call mocks.HTTPServices["shippingService"].EXPECT().Get( ctx.Context, "/shipping/"+testOrderID, nil, ).Return(nil, errors.New("shipping service unavailable")) }, requestPath: "/orders/" + testOrderID, wantErr: true, wantErrMsg: 
"failed to fetch shipping details", }, { name: "missing order ID", setupMocks: func(mocks *container.Mocks, ctx *gofr.Context) { // No service calls should be made when order ID is missing }, requestPath: "/orders/", wantErr: true, wantErrMsg: "order ID is required", }, } // Register HTTP services once - each service gets its own separate mock instance // Since all test cases use the same services, we can create the mock container outside the loop mockContainer, mocks := container.NewMockContainer(t, container.WithMockHTTPService("paymentService", "shippingService"), ) for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { // Create test context using helper function ctx := createTestContext(tt.requestPath, mockContainer) // Set up mock expectations BEFORE calling the handler // Each service's expectations are independent tt.setupMocks(mocks, ctx) // Call the handler result, err := OrderDetailsHandler(ctx) if tt.wantErr { require.Error(t, err) assert.Contains(t, err.Error(), tt.wantErrMsg) assert.Nil(t, result) } else { require.NoError(t, err) if tt.validateResult != nil { tt.validateResult(t, result) } } }) } } ``` **Key Points**: - Each service registered via `WithMockHTTPService` gets its own separate mock instance - Always use `mocks.HTTPServices["serviceName"]` to access and set expectations for a specific service - Always create the `gofr.Context` with the exact request context (`req.Context()`) that will be used in the handler - Set expectations on the mock services before calling the handler - Test both success and error scenarios to ensure your handlers handle all cases correctly ### Summary - **Mocking Database Interactions**: Use GoFr mock container to simulate database interactions. - **Mocking HTTP Services**: Use `WithMockHTTPService("serviceName")` to register and mock HTTP services. - **Context Matching**: Always use `ctx.Context` from your `gofr.Context` in mock expectations, not `t.Context()` or `context.Background()`. 
- **Define Test Cases**: Create table-driven tests to handle various scenarios. - **Run and Validate**: Ensure that your tests check for expected results and handle errors correctly. This approach guarantees that your database and HTTP service interactions are tested independently, allowing you to simulate different responses and errors with minimal friction. --- # Why GoFr ## https://gofr.dev/why-gofr # Why GoFr? {% answer %} GoFr is an opinionated Go framework focused on microservices. Minimal routers like Gin, Fiber, and Chi keep their surface area small by design and let you assemble the rest of your stack the way you prefer. GoFr makes a different trade-off: it bundles a common production layer — OpenTelemetry tracing, Prometheus metrics, structured logging, datasource clients, migrations, Pub/Sub, gRPC, GraphQL, WebSockets, health checks, circuit breakers, graceful shutdown — with sensible defaults. Both approaches are valid; this page describes the situations where GoFr's trade-off tends to fit. {% /answer %} ## See the difference in 20 lines A REST handler that connects to MySQL, emits OpenTelemetry traces, exports Prometheus metrics, and writes structured logs: **With `net/http` + your stack of choice:** ```go // Init: tracer provider, exporter, propagator, sampler. // Init: prometheus registry, HTTP histogram, label cardinality plan. // Init: structured logger, request-id middleware, log-context plumbing. // Init: sql.DB with connection pool, otelsql instrumentation. // Per-handler: extract span from context, propagate to db query, // record metrics with labels, structured log with trace id. // You write all of this. ~150 lines of glue before you write business logic. 
``` **With GoFr:** ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.GET("/users/{id}", func(c *gofr.Context) (any, error) { var name string err := c.SQL.QueryRowContext(c, "SELECT name FROM users WHERE id=?", c.PathParam("id")).Scan(&name) return map[string]string{"name": name}, err }) app.Run() } ``` Tracing, metrics, structured logging with trace IDs, connection pooling, and DB span correlation are emitted automatically. Configuration is `.env` based. ## The trade-off behind opinionated frameworks Microservices often share the same supporting needs: structured logging, request tracing, metrics, datasource clients, message brokers, health checks, circuit breakers, retries, environment-based config, graceful shutdown. With a minimal router, you compose these yourself by bringing libraries like `zap`, `otel-go`, `prometheus/client_golang`, `sqlx`, `sarama`, or `gobreaker`. That's a strength when you want full control over each layer; it's a cost when teams keep wiring similar combinations across many services. GoFr's wager is that this wiring is worth standardizing as a shared default. Some teams will appreciate the time saved; others will prefer the precision of composing their own stack. ## What's actually in GoFr GoFr's positioning, [from the framework's README](https://github.com/gofr-dev/gofr), is: > **An Opinionated Microservice Development Framework — designed to simplify microservice development, with a key focus on Kubernetes deployment and out-of-the-box observability.** Concretely: - **HTTP, gRPC, GraphQL, WebSockets, CLI** — one handler signature `func(*Context) (any, error)` across all of them. - **Auto CRUD handlers** — `app.AddRESTHandlers(&Entity{})` generates Create / Get / GetAll / Update / Delete endpoints from a struct. - **Observability built in** — OpenTelemetry traces (OTLP/Jaeger), Prometheus metrics, structured contextual logging. Configurable sampling. Remote log-level changes without restart. 
- **15+ datasources** — MySQL, PostgreSQL, Oracle, SQLite, MongoDB, Redis, Cassandra, ScyllaDB, ClickHouse, CockroachDB, Couchbase, DGraph, SurrealDB, ArangoDB, Elasticsearch, Solr, InfluxDB, OpenTSDB. KV-store backends include Badger, DynamoDB, and NATS. All auto-instrumented. - **Pub/Sub** — Kafka, NATS JetStream, Google Pub/Sub, AWS SQS, MQTT, Azure Event Hub. - **File storage** — local filesystem, Amazon S3, Google Cloud Storage, Azure Blob, FTP, SFTP — one interface. - **Service-to-service HTTP client** — circuit breaker, retry, rate limit, connection pool, Basic / API-key / OAuth auth — all configurable per service. - **Migrations** — versioned for SQL, MongoDB, Redis, DGraph, and more. - **Auth & RBAC** — Basic, API key, OAuth (JWKS-validated JWT), config-driven role/permission mappings. - **Built-in Swagger UI** — drop your `openapi.json` in `static/` and `/.well-known/swagger` renders it. - **Cron jobs** — 5- and 6-part expressions with auto-instrumented OpenTelemetry spans per job. - **Graceful shutdown + startup hooks** — `OnStart` for warmup; clean teardown of connections. Each of these is something you can also assemble yourself with libraries you trust. GoFr packages a common combination so you don't have to re-make those choices on every service. ## Who tends to like GoFr - **Teams building microservices on Kubernetes** who want tracing, metrics, and structured logs available from the first commit. - **Engineers coming from Spring Boot, Express, or NestJS** who are used to a "batteries-included" framework and prefer that style. - **Gin / Fiber / Chi users** who find themselves repeatedly writing similar observability, datasource, and resilience plumbing across services and would rather standardize it. ## Where to go next - [Quick Start: Build your first GoFr REST API](/docs/quick-start/introduction) — running in under 5 minutes. - [GoFr vs Gin / Fiber / Echo / Chi](/comparison) — head-to-head on features. 
- [Migrate from Gin / Fiber / Express / Flask / Spring Boot](/migrate) — concrete code translations. - [Documentation](/docs) — full reference. {% faq %} {% faq-item question="Is GoFr free and open source?" %} Yes. GoFr is licensed under Apache 2.0 and developed in the open at [github.com/gofr-dev/gofr](https://github.com/gofr-dev/gofr). There is no paid tier; the framework is fully usable without commercial licensing. {% /faq-item %} {% faq-item question="Does GoFr replace OpenTelemetry, Prometheus, or my logger?" %} No. GoFr uses OpenTelemetry SDKs, Prometheus client libraries, and structured logging primitives directly. You still export to your existing OTel collector, Prometheus, or log aggregator. GoFr removes the wiring, not the standards. {% /faq-item %} {% faq-item question="Is GoFr production-ready?" %} GoFr has been used in production microservices at companies like American Express, IBM, Walmart, and Mydbops. See the [showcase page](/showcase) for more. {% /faq-item %} {% faq-item question="Can I use GoFr alongside an existing Gin or Fiber service?" %} Yes. GoFr is a separate Go module; you can run a new GoFr service in the same fleet as existing Gin / Fiber / Echo services. Most teams adopt GoFr for new services first, then migrate older ones gradually. {% /faq-item %} {% faq-item question="Does GoFr lock me into specific datasources?" %} No. The datasource interfaces are open — see [Injecting Custom Database Drivers](/docs/advanced-guide/injecting-databases-drivers). Built-in support exists for the most common backends so you don't write that code yourself. {% /faq-item %} {% /faq %} --- # Comparison ## https://gofr.dev/comparison/gofr-vs-chi # GoFr vs Chi {% answer %} **Chi** is a small, idiomatic `net/http`-compatible router that composes beautifully with the standard library — a great fit when minimal dependencies and full control matter. 
**GoFr** has a wider scope: HTTP routing alongside gRPC, GraphQL, WebSockets, Pub/Sub, cron, migrations, OpenTelemetry tracing, Prometheus metrics, structured logging, datasource clients, and a service-to-service HTTP client with circuit breakers. Different goals, both open source — both have happy users. {% /answer %} ## What Chi is great at - **Idiomatic Go** — `func(http.ResponseWriter, *http.Request)` everywhere; zero magic. - **Lightweight** — small dependency footprint, fast. - **Composable** — works seamlessly with `net/http` middleware, the standard library, and any third-party `net/http`-compatible library. - **Maintained by go-chi/chi** — well-respected in the Go community. ## Where the projects differ Chi takes no position on how you structure your service or which libraries you bring for logging, tracing, datasources, or downstream calls — that's a strength when you want full control and a small dependency footprint. GoFr takes the opposite design choice: it standardizes a common combination of those layers (OpenTelemetry, Prometheus, structured logging, datasource clients with retries, message brokers, circuit breakers, health checks) so teams maintaining several services don't make the same composition choices repeatedly. Both approaches have their place. ### Side-by-side: a service that calls a database and emits a trace **Chi (with manual wiring):** ```go import ( "database/sql" "log/slog" "github.com/go-chi/chi/v5" "github.com/prometheus/client_golang/prometheus/promhttp" "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" "go.opentelemetry.io/otel" // ... otel exporter setup, prometheus registry setup, db driver, slog setup ) // You write tracer init, metrics init, logger init, DB connection, // then wrap your handler with otelhttp, register Prom on /metrics, // and propagate a request-scoped logger. 
``` **GoFr:** ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.GET("/users/{id}", func(c *gofr.Context) (any, error) { var name string err := c.SQL.QueryRowContext(c, "SELECT name FROM users WHERE id=?", c.PathParam("id")).Scan(&name) return map[string]string{"name": name}, err }) app.Run() } ``` Tracing, metrics, structured logging with trace IDs, and DB span correlation are emitted automatically. ## When GoFr might be a good fit - You're maintaining several services and the same wiring keeps reappearing in each. - You'd like gRPC, Pub/Sub, GraphQL, or WebSockets alongside HTTP under one framework. - Auto-instrumented database clients fit your operational model. - Consistent configuration and observability defaults matter to you across multiple services. {% faq %} {% faq-item question="Can I use Chi-style net/http middleware in GoFr?" %} Yes. GoFr's `UseMiddleware` accepts `func(http.Handler) http.Handler` — the standard `net/http` signature Chi uses. {% /faq-item %} {% faq-item question="Does GoFr support route patterns like Chi's?" %} GoFr supports path parameters, wildcards, and method-specific routing. The exact syntax differs slightly; see the routing reference. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/comparison/gofr-vs-echo # GoFr vs Echo {% answer %} **Echo** is a clean, ergonomic HTTP framework with a polished API and a good middleware curation — well suited for HTTP APIs where you want to compose your own production stack. **GoFr** has a wider scope: alongside HTTP routing it bundles OpenTelemetry tracing, Prometheus metrics, datasource clients, gRPC, GraphQL, WebSockets, Pub/Sub, migrations, cron, and a resilient service-to-service HTTP client. Two different scopes; pick the one that matches your project. {% /answer %} ## What Echo is great at - **Clean, ergonomic API** — `c.JSON`, `c.Bind`, group routing, middleware composition feel polished. 
- **Performance** — competitive with Gin on `net/http`-based benchmarks. - **Strong middleware ecosystem** — official middleware for JWT, rate limit, CORS, logger, recover, etc. - **Built-in HTTP/2 and graceful shutdown** — production-ready HTTP defaults. ## Where the scopes differ | Concern | Echo | GoFr | |---|---|---| | HTTP routing & middleware | Yes | Yes | | OpenTelemetry tracing | Via middleware library | Built in | | Prometheus metrics | Via middleware library | Built in | | Structured logging with request context | Via library | Built in | | Database clients (MySQL, Mongo, Redis, etc.) | Bring your own | 15+ built in, auto-instrumented | | gRPC server | Run separately | Built in | | GraphQL | Bring your own (gqlgen) | Built in | | Pub/Sub | Bring your own (Kafka, NATS) | Built in | | Cron jobs | Bring your own | Built in | | Database migrations | Bring your own (golang-migrate) | Built in | | Service-to-service HTTP w/ circuit breaker | Bring your own | Built in | | RBAC | Build it | Config-driven | | Health endpoints | Define manually | Auto-exposed at `/.well-known/health` | ### Hello world **Echo:** ```go package main import "github.com/labstack/echo/v4" func main() { e := echo.New() e.GET("/hello", func(c echo.Context) error { return c.JSON(200, map[string]string{"message": "Hello, world"}) }) e.Start(":8000") } ``` **GoFr:** ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.GET("/hello", func(c *gofr.Context) (any, error) { return "Hello, world", nil }) app.Run() } ``` ## When GoFr might be a good fit - You'd prefer the production layer bundled rather than composed. - gRPC, GraphQL, Pub/Sub, WebSockets, or cron alongside HTTP are useful for your work. - You'd like consistent observability and configuration across multiple services. {% faq %} {% faq-item question="Does GoFr have an equivalent of Echo's grouped routes?" %} GoFr does not have a one-line `Group` equivalent. 
Replicate it by composing handlers with shared helpers and registering middleware globally with `app.UseMiddleware`. {% /faq-item %} {% faq-item question="Can I migrate Echo handlers to GoFr?" %} The mental model translates well: `echo.Context.JSON(200, x)` becomes `return x, nil`. Bind, path params, and query params have direct equivalents on `gofr.Context`. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/comparison/gofr-vs-fiber # GoFr vs Fiber {% answer %} **Fiber** is an Express-inspired HTTP framework built on `fasthttp` — a great choice when you want a familiar API for Node.js refugees and high HTTP throughput. **GoFr** sits on `net/http` and has a wider scope: alongside HTTP routing it bundles OpenTelemetry tracing, Prometheus metrics, datasource clients, gRPC, GraphQL, WebSockets, Pub/Sub, cron, migrations, and circuit breakers. Different trade-offs, both open source — pick whichever fits the work in front of you. {% /answer %} ## What Fiber is great at - **Performance** — built on `fasthttp`, regularly outperforms `net/http`-based frameworks on synthetic benchmarks. - **Express-like API** — feels natural for developers from Node.js. - **Built-in WebSocket** — rich HTTP feature set out of the box. - **Active ecosystem** — many official middleware packages. ## Where they diverge ### HTTP foundation Fiber's foundation is `fasthttp`, which is **not compatible with `net/http`**. Some Go libraries assume `http.ResponseWriter`/`http.Request` and won't drop into a Fiber handler without an adapter. GoFr is built on `net/http`, so the standard library and any `net/http`-compatible middleware works. ### Scope beyond HTTP Fiber focuses on HTTP. For other protocols, you'd add separate libraries (which works well — the Go ecosystem has good options for each). 
GoFr bundles those protocols under the same configuration and observability: ```go app.RegisterService(serviceDesc, impl) // gRPC service app.GraphQLQuery("user", userResolver) app.Subscribe("orders", orderHandler) app.AddCronJob("0 * * * *", "billing", run) // every hour at :00 ``` ### Observability and datasources Fiber middleware exists for OpenTelemetry and Prometheus, but you wire them in. In GoFr, traces / metrics / structured logs are emitted by default with no setup beyond pointing at your collectors via env vars. GoFr ships clients for MySQL, PostgreSQL, Mongo, Redis, Cassandra, ClickHouse, Kafka, NATS, S3, GCS, and a dozen more — all auto-instrumented. ## When GoFr might be a good fit - You'd like gRPC, Pub/Sub, GraphQL, WebSockets, or cron alongside HTTP without separately wiring them up. - OpenTelemetry tracing and Prometheus metrics by default fit your operational model. - Auto-instrumented database clients save you wiring time you'd rather spend elsewhere. - You're maintaining several services and would prefer a single configuration model across them. ## Migration Already on Fiber? See the [Migrate from Fiber guide](/migrate/from-fiber) for concrete code translations. {% faq %} {% faq-item question="Can Fiber use net/http middleware?" %} With an adapter, yes — Fiber provides `adaptor.HTTPHandler` to wrap `net/http` middleware. There's a small overhead per call. GoFr uses `net/http` natively, so no adapter is needed. {% /faq-item %} {% faq-item question="Does GoFr have a fasthttp-based mode?" %} No. GoFr is built on `net/http` and prioritizes ecosystem compatibility. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/comparison/gofr-vs-gin # GoFr vs Gin {% answer %} **Gin** is a fast, minimal HTTP router with a familiar API and a mature middleware ecosystem — a great fit when you want a thin router and to compose the rest of your stack yourself. 
**GoFr** has a wider scope: alongside HTTP routing it bundles OpenTelemetry tracing, Prometheus metrics, structured logging, datasource clients, gRPC, GraphQL, WebSockets, Pub/Sub, migrations, cron, circuit breakers, and health checks. Two different trade-offs; both are open source. {% /answer %} ## What Gin is great at - **Performance** — minimal overhead on top of `net/http`, fast routing. - **Familiar API** — `c.JSON`, `c.Bind`, `c.Param` patterns are intuitive. - **Mature middleware ecosystem** — community packages for almost everything. - **Stable, large community** — battle-tested in production. ## Where the projects differ Gin is intentionally focused on routing. Anything beyond routing — observability, database access, message brokers, retries, circuit breakers, health checks — is something you compose by picking libraries you trust. That's a deliberate strength when you want full control. GoFr takes the opposite design choice: it bundles a common production layer behind one configuration surface so teams maintaining several services don't make those composition choices repeatedly. Neither is universally better — pick the one that matches how your team prefers to work. ### Hello world side-by-side **Gin:** ```go package main import "github.com/gin-gonic/gin" func main() { r := gin.Default() r.GET("/hello", func(c *gin.Context) { c.JSON(200, gin.H{"message": "Hello, world"}) }) r.Run(":8000") } ``` **GoFr:** ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.GET("/hello", func(c *gofr.Context) (any, error) { return "Hello, world", nil }) app.Run() } ``` ### Adding tracing, metrics, and a Postgres connection **Gin** — pull in `otelgin`, `otelhttp`, `prometheus/client_golang`, `pgx`. Configure each. Wire them together. Make sure trace IDs propagate from request → DB query. **GoFr** — set `TRACE_EXPORTER`, `TRACER_URL`, `METRICS_PORT`, and `DB_HOST` in `.env`. Call `c.SQL` to query. 
Traces, metrics, and structured logs are emitted automatically. (`TRACER_HOST` / `TRACER_PORT` are deprecated — the runtime logs a warning if you set them; use `TRACER_URL` instead.) ### Service-to-service HTTP with circuit breaker ```go // Register a downstream service once at startup: app.AddHTTPService("payments", "https://payments.internal") // Inside any handler, look it up via the request context: func chargeHandler(ctx *gofr.Context) (any, error) { resp, err := ctx.GetHTTPService("payments").Get(ctx, "/charge", nil) // ... } ``` Circuit breaker, retry, rate limit, connection pool, and auth are configurable through the service registration. ### gRPC, Pub/Sub, cron, WebSockets ```go app.RegisterService(serviceDesc, impl) // gRPC app.Subscribe("orders", orderHandler) // Pub/Sub (Kafka, NATS, etc.) app.AddCronJob("0 * * * *", "billing", run) // Cron app.WebSocket("/stream", wsHandler) // WebSocket ``` ## When GoFr might be a good fit - You'd prefer tracing, metrics, and structured logs available by default rather than composed. - You'd like gRPC, GraphQL, Pub/Sub, or WebSockets alongside HTTP under one framework. - You maintain several similar services and would rather standardize the production wiring once. - You're deploying to Kubernetes and want health checks, graceful shutdown, and consistent configuration as defaults. ## Migration Already on Gin? See the [Migrate from Gin guide](/migrate/from-gin) for concrete code translations. {% faq %} {% faq-item question="Can I use Gin middleware in GoFr?" %} Not directly — GoFr has its own middleware signature `func(http.Handler) http.Handler` which is the standard `net/http` pattern, not Gin's `gin.HandlerFunc`. Translating a typical Gin middleware is straightforward; see the migration guide. {% /faq-item %} {% faq-item question="Does GoFr support all of Gin's request binding?" %} GoFr supports JSON, form, multipart, path params, and query params via `ctx.Bind`, `ctx.PathParam`, `ctx.Param`. 
Validation is left to the choice of library. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/comparison # GoFr vs Gin, Fiber, Echo & Chi {% answer %} GoFr, Gin, Fiber, Echo, and Chi are all open-source projects in the same space, with different scopes. **Gin, Fiber, Echo, and Chi are minimal HTTP routers** — by design — and let teams compose observability, datasources, gRPC, Pub/Sub, and resilience patterns from the libraries of their choosing. **GoFr is a microservice framework** with a wider scope: HTTP routing alongside OpenTelemetry tracing, Prometheus metrics, structured logging, datasource clients, migrations, Pub/Sub, gRPC, GraphQL, WebSockets, cron, and a service-to-service HTTP client with circuit breakers — all bundled with defaults you can override. The matrix below shows the differences without taking a position on which is "better". {% /answer %} ## At-a-glance feature matrix | Feature | GoFr | Gin | Fiber | Echo | Chi | |---|---|---|---|---|---| | HTTP routing | Yes | Yes | Yes | Yes | Yes | | Middleware system | Yes | Yes | Yes | Yes | Yes | | Auto CRUD handlers from struct | Yes | No | No | No | No | | gRPC server (built-in) | Yes | No | No | No | No | | GraphQL server (built-in) | Yes | No | No | No | No | | WebSocket server + client | Yes | Via library | Yes (server) | Via library | Via library | | OpenTelemetry tracing (built-in) | Yes | Via library | Via library | Via library | Via library | | Prometheus metrics (built-in) | Yes | Via library | Via library | Via library | Via library | | Structured logging (built-in) | Yes | Via library | Via library | Via library | Via library | | Remote log-level change | Yes | No | No | No | No | | 15+ datasource clients (built-in) | Yes | No | No | No | No | | Pub/Sub (Kafka, NATS, GCP, MQTT, SQS, Azure) | Yes | No | No | No | No | | Database migrations | Yes | No | No | No | No | | Service-to-service HTTP w/ circuit breaker | Yes | No | No | No | No | | Cron jobs | Yes | No | No | No | No | | Auth: Basic 
/ API key / JWT (JWKS) | Yes | Via library | Via library | Via library | Via library | | RBAC (config-driven) | Yes | No | No | No | No | | Health checks (incl. datasource health) | Yes | Manual | Manual | Manual | Manual | | Swagger UI built in | Yes | Via library | Via library | Via library | Via library | | Built on net/http | Yes | Yes | No (fasthttp) | Yes | Yes | | License | Apache 2.0 | MIT | MIT | MIT | MIT | ## When GoFr might be a good fit - You'd like observability, datasources, Pub/Sub, and resilience patterns bundled with a single configuration surface rather than composed yourself. - You're maintaining several similar microservices and would prefer not to re-make the same OpenTelemetry / Prometheus / Kafka / migration choices for each one. - You want gRPC, GraphQL, WebSockets, and HTTP under one consistent handler signature. - Your deployment target is Kubernetes and out-of-the-box health checks, structured logging, and graceful shutdown are useful defaults. ## Per-framework deep dives - [GoFr vs Gin →](/comparison/gofr-vs-gin) - [GoFr vs Fiber →](/comparison/gofr-vs-fiber) - [GoFr vs Echo →](/comparison/gofr-vs-echo) - [GoFr vs Chi →](/comparison/gofr-vs-chi) ## Migration Already on one of these? Migration guides with code translations: - [Migrate from Gin →](/migrate/from-gin) - [Migrate from Fiber →](/migrate/from-fiber) {% faq %} {% faq-item question="Can I migrate from Gin / Fiber / Echo to GoFr?" %} Yes. The mental model is similar (handler → router → middleware), and GoFr's handler signature is straightforward to adopt. See the migration guides. {% /faq-item %} {% faq-item question="What about Beego, Revel, or other older frameworks?" %} Beego, Revel, and Buffalo are full-stack frameworks that include templating, ORM, and asset pipelines. GoFr is scoped to microservices and APIs, with no template engine or ORM, so the comparison is mostly one of scope rather than competition. 
{% /faq-item %} {% /faq %} --- # Migration guides ## https://gofr.dev/migrate/from-aspnet-core # Migrate from ASP.NET Core to GoFr {% answer %} ASP.NET Core teams adopting GoFr keep the same operational shape — opinionated framework, built-in DI, configuration, logging, health checks, OpenTelemetry — but lose the class-and-attribute style. Controllers become handler functions, the `IServiceCollection` DI container becomes constructor passing, and `appsettings.json` becomes `.env` files in `configs/`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} ## Mental model | ASP.NET Core | GoFr | |---|---| | `[ApiController]` + `[Route("api/users")]` | `app.GET("/api/users", handler)` | | `[HttpGet("{id}")]` | `app.GET("/api/users/{id}", handler)` | | `[FromBody] CreateUserDto dto` | `var dto CreateUser; c.Bind(&dto)` | | `[FromQuery]`, `[FromRoute]` | `c.Param("q")`, `c.PathParam("id")` | | `IServiceCollection` / `IServiceProvider` | Constructor passing; datasources via `*gofr.Context` | | `appsettings.json` + environment overlays | `configs/.env` + per-environment files | | `Configuration.GetSection(...)` | `app.Config.Get(key)` | | Middleware pipeline (`app.UseX`) | `app.UseMiddleware(...)` | | `IHostedService` / `BackgroundService` | Goroutines started in `OnStart`, or cron jobs | | `IHttpClientFactory` + Polly | `app.AddHTTPService` (circuit breaker + retry + rate limit built-in) | | Health Checks UI | `/.well-known/health` (auto) | | Serilog / `ILogger` | Built-in structured JSON logger | | `dotnet ef migrations` | GoFr SQL migrations | | Hangfire / Quartz | `app.AddCronJob(...)` and Pub/Sub subscribers | ## Side-by-side: controller ↔ 
handler

**ASP.NET Core:**

```csharp
[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase {
    private readonly IUserService _users;

    public UsersController(IUserService users) => _users = users;

    [HttpPost]
    public async Task<IActionResult> Create([FromBody] CreateUserDto dto) {
        var user = await _users.CreateAsync(dto);
        return Ok(user);
    }
}
```

**GoFr:**

```go
type UsersHandler struct {
    Users UserService
}

func (h *UsersHandler) Create(c *gofr.Context) (any, error) {
    var dto CreateUser
    if err := c.Bind(&dto); err != nil {
        return nil, err
    }

    return h.Users.Create(c, dto)
}

func main() {
    app := gofr.New()
    h := &UsersHandler{Users: NewUserService()}
    app.POST("/api/users", h.Create)
    app.Run()
}
```

## Configuration: appsettings.json → .env

**ASP.NET Core (`appsettings.json`):**

```json
{
  "ConnectionStrings": { "Default": "Server=localhost;Database=app;User Id=root" },
  "Logging": { "LogLevel": { "Default": "Information" } }
}
```

**GoFr (`configs/.env`):**

```bash
DB_HOST=localhost
DB_NAME=app
DB_USER=root
LOG_LEVEL=INFO
```

Environment-specific overrides layer on top: GoFr reads `configs/.env` first, then overlays `configs/.<APP_ENV>.env` (so `APP_ENV=production` overlays `configs/.production.env`). Note the leading dot and the `.env` suffix on the override file. This is a natural fit for Kubernetes ConfigMaps and Secrets.

## Dependency injection

ASP.NET Core's `IServiceCollection` (transient/scoped/singleton) is replaced by:

- **Constructor passing** — pass dependencies into your handler structs at startup. Sufficient for almost all services.
- **`*gofr.Context`** — datasources (SQL, Redis, Mongo, Pub/Sub clients, HTTP services) are accessed through the request context, so per-request "scoped" services come for free.
- **Wire / Fx** — if you want a generated DI graph, both libraries integrate cleanly.
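The overlay precedence described above can be illustrated with a toy sketch — this is not GoFr's loader, just the merge order it applies (base file read first, then the `APP_ENV` file wins on conflicting keys):

```go
package main

import "fmt"

// merge overlays environment-specific values on top of base values,
// mimicking how configs/.env is read first and then the per-environment
// file overrides any matching keys.
func merge(base, overlay map[string]string) map[string]string {
	out := make(map[string]string, len(base))
	for k, v := range base {
		out[k] = v
	}
	for k, v := range overlay {
		out[k] = v // overlay wins on conflicts
	}
	return out
}

func main() {
	base := map[string]string{"DB_HOST": "localhost", "LOG_LEVEL": "INFO"} // configs/.env
	prod := map[string]string{"DB_HOST": "db.prod.internal"}               // configs/.production.env

	cfg := merge(base, prod)
	fmt.Println(cfg["DB_HOST"], cfg["LOG_LEVEL"]) // db.prod.internal INFO
}
```

Keys absent from the override file keep their base values, which is why a production overlay usually only lists what actually differs.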
## Middleware pipeline ASP.NET Core's `app.UseAuthentication().UseAuthorization()` style maps to: ```go app.UseMiddleware(authMiddleware) app.UseMiddleware(rbacMiddleware) ``` Built-in auth options include Basic, API Key, and OAuth/JWT — see [authentication](/docs/advanced-guide/authentication). RBAC is supported on top. ## Datasources `Entity Framework Core`-style ORM is not built in. GoFr provides connection-pooled SQL clients with observability — pair with `sqlc` for type-safe queries if you want EF-like ergonomics. SQL (MySQL/Postgres/Oracle/SQLite/SQL Server), MongoDB, Redis, Cassandra, ScyllaDB, Couchbase, ArangoDB, Dgraph, SurrealDB are supported, with migrations for SQL/Mongo/Redis/Dgraph. ## Observability OTLP is the lingua franca on both sides — point GoFr at the same collector you already use for `OpenTelemetry.Exporter.OpenTelemetryProtocol`. GoFr emits OpenTelemetry traces, Prometheus metrics at `/metrics`, structured JSON logs with trace IDs, and exposes health at `/.well-known/health`. Log levels can be changed at runtime via the [remote log-level endpoint](/docs/advanced-guide/remote-log-level-change). ## Gradual adoption Stand up a GoFr microservice next to your ASP.NET Core service. From GoFr, call back into the legacy service through `app.AddHTTPService("legacy", baseURL)` with built-in circuit breaker, retries, and rate limiting. Move endpoints across at the gateway, one bounded context at a time. {% faq %} {% faq-item question="Can I run ASP.NET Core and GoFr in the same cluster?" %} Yes. Both are stateless HTTP/gRPC services. Wire shared OTLP collectors, share auth tokens, and the two interoperate cleanly. {% /faq-item %} {% faq-item question="What replaces Entity Framework migrations?" %} GoFr SQL migrations — versioned, ordered up-migrations applied at boot. See the [migrations guide](/docs/advanced-guide/handling-data-migrations). {% /faq-item %} {% faq-item question="What about gRPC services and interceptors?" 
%} Supported directly — register your generated `pb` server with GoFr, attach interceptors. See the [gRPC guide](/docs/advanced-guide/grpc). {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-chi # Migrate from chi to GoFr {% answer %} chi is a router; GoFr is a framework. Migrating means dropping a lot of glue you wrote yourself — logging, tracing, metrics, datasource pooling, health endpoints, retry/circuit-breaker on outbound calls — and accepting GoFr's opinions on response shape and configuration. Handlers change from `http.HandlerFunc` to `func(c *gofr.Context) (any, error)`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} ## Mental model: router vs framework chi's design goal is "a thin, idiomatic, `net/http`-compatible router". You bring everything else: a logger, OpenTelemetry instrumentation, Prometheus middleware, your own datasource pools, your own retry library, your own health endpoint. That's a feature when you want full control. It becomes a tax when every microservice in your fleet ends up reassembling the same five libraries. GoFr makes the opposite trade: an opinionated handler signature in exchange for built-in observability, datasource clients, resilience on outbound HTTP, and health out of the box. If your chi service is mostly your own glue around the router, the migration mostly deletes code. 
## Handler translation **chi:** ```go r := chi.NewRouter() r.Get("/users/{id}", func(w http.ResponseWriter, r *http.Request) { id := chi.URLParam(r, "id") user, err := db.GetUser(id) if err != nil { http.Error(w, err.Error(), http.StatusNotFound) return } json.NewEncoder(w).Encode(user) }) ``` **GoFr:** ```go app := gofr.New() app.GET("/users/{id}", func(c *gofr.Context) (any, error) { id := c.PathParam("id") return db.GetUser(id) }) ``` The path syntax (`{id}`) is identical. You no longer touch `http.ResponseWriter` directly for typical JSON responses. ## Request binding chi has no body binding — you reach for `json.NewDecoder(r.Body).Decode(&v)`. In GoFr: ```go var input CreateUser if err := c.Bind(&input); err != nil { return nil, err } ``` `c.Bind` handles JSON, form, and multipart. ## Param access | Operation | chi | GoFr | |---|---|---| | Path param | `chi.URLParam(r, "id")` | `c.PathParam("id")` | | Query param | `r.URL.Query().Get("q")` | `c.Param("q")` | | Header | `r.Header.Get("X-Foo")` | Read in custom middleware (`func(http.Handler) http.Handler`) on the underlying `*http.Request`; `c.Request` is the abstract `gofr.Request` interface and does not expose `Header` | | Raw `*http.Request` | `r` | Not exposed on `c.Request`; `c.Request` is the `gofr.Request` interface (`Param`, `PathParam`, `Bind`, `HostName`, `Params`, `Context`). Reach the raw request through middleware if needed | ## Middleware chi middleware is `func(http.Handler) http.Handler` — and so is GoFr's. Most chi middleware can be adapted by changing the registration call: **chi:** ```go r.Use(myMiddleware) ``` **GoFr:** ```go app.UseMiddleware(myMiddleware) ``` You can usually delete chi middleware that exists only for cross-cutting infra (`chi/middleware.Logger`, `chi/middleware.Recoverer`, OTel/Prom adapters) — GoFr already provides those. ## Route groups and sub-routers chi's `r.Route("/api/v1", func(r chi.Router) { ... })` pattern doesn't have a one-line equivalent in GoFr. 
The pragmatic translation is to register a path prefix per route, or wrap a small helper that closes over the prefix. For larger surfaces, model bounded contexts as separate handler structs and register their methods. ## Render package If you used `go-chi/render` for `render.JSON(w, r, v)`, the GoFr equivalent is just `return v, nil`. Error responses are produced from `return nil, err` and shaped by GoFr's [error handling](/docs/advanced-guide/gofr-errors). ## Datasources In a chi service you typically `sql.Open` yourself, manage a `*sql.DB`, set pool sizes, and instrument it. GoFr auto-initializes SQL and Redis from environment variables — set `DB_DIALECT`, `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, `DB_NAME` (or `REDIS_HOST`, `REDIS_PORT`) in `configs/.env` and `gofr.New()` wires the connection. Other clients are registered explicitly with a provider: ```go app.AddMongo(mongo.New(mongo.Config{/* ... */})) ``` Inside handlers, use `c.SQL`, `c.Redis`, `c.Mongo`. SQL (MySQL/Postgres/Oracle/SQLite/SQL Server), Redis, Mongo, Cassandra, ScyllaDB, Couchbase, ArangoDB, Dgraph, SurrealDB. SQL/Mongo/Redis/Dgraph migrations are first-class — see [migrations](/docs/advanced-guide/handling-data-migrations). ## Observability Where a chi service typically wires `otelhttp`, `prometheus/promhttp`, a logger, and a `/healthz` endpoint by hand, GoFr ships OpenTelemetry tracing, Prometheus metrics at `/metrics`, structured JSON logs with trace IDs, and `/.well-known/health` automatically. ## Outbound HTTP For service-to-service calls, instead of layering Hystrix-style libraries onto an `http.Client`: ```go app.AddHTTPService("payments", "http://payments:8000") ``` Circuit breaker, retries, and rate limiting are configured per service. ## Gradual adoption Run a new GoFr service alongside your chi services. From GoFr, call into the chi service via `app.AddHTTPService` with built-in resilience. Move endpoints across at your gateway one bounded context at a time. 
{% faq %} {% faq-item question="Can I run chi and GoFr in the same cluster?" %} Yes — both are stateless Go HTTP servers. Bridge them via HTTP through `app.AddHTTPService` or via a shared Pub/Sub topic. {% /faq-item %} {% faq-item question="Will I lose chi's raw performance?" %} GoFr uses a comparable router under the hood; the perf difference at typical service throughput is dwarfed by what your handlers and datasources do. The honest trade-off is opinionated response shape, not throughput. {% /faq-item %} {% faq-item question="Can I keep using `http.HandlerFunc`-style middleware?" %} Yes — GoFr's `UseMiddleware` accepts `func(http.Handler) http.Handler`, so most chi middleware drops in unchanged. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-django-rest # Migrate from Django REST Framework to GoFr {% answer %} Django REST Framework's `ModelViewSet` + `ModelSerializer` pattern maps onto GoFr's `AddRESTHandlers`, which generates the standard CRUD surface against a Go struct. The Django ORM is replaced by GoFr's SQL clients (with explicit queries — no ORM); DRF permissions become GoFr RBAC and middleware; `settings.py` becomes `.env` in `configs/`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. 
{% /callout %} ## Mental model | Django REST | GoFr | |---|---| | `ModelViewSet` | `app.AddRESTHandlers(&Entity{})` (auto CRUD) | | `APIView` | `app.GET/POST/...` with handler functions | | `ModelSerializer` | Go struct + JSON tags | | Validators on serializer fields | Struct validation tags + a validator library | | `request.data` | `c.Bind(&dto)` | | URL routers / `router.register` | `app.GET/POST(...)` per route | | `IsAuthenticated`, custom permissions | GoFr Basic / APIKey / OAuth-JWT auth + RBAC | | `settings.py` | `configs/.env` | | Django signals | No direct equivalent — use Pub/Sub for cross-service events | | `manage.py migrate` | GoFr SQL migrations | | Celery | `app.AddCronJob(...)` and Pub/Sub subscribers | | `django-prometheus` / `OpenTelemetry` | Built into GoFr | ## Side-by-side: ViewSet ↔ AddRESTHandlers **Django REST:** ```python class UserViewSet(viewsets.ModelViewSet): queryset = User.objects.all() serializer_class = UserSerializer router = DefaultRouter() router.register('users', UserViewSet) ``` **GoFr:** ```go type User struct { ID int `json:"id"` Name string `json:"name"` Email string `json:"email"` } func main() { app := gofr.New() if err := app.AddRESTHandlers(&User{}); err != nil { // GET, POST, GET/{id}, PUT/{id}, DELETE/{id} app.Logger().Fatal(err) } app.Run() } ``` `AddRESTHandlers` reads the struct, infers the table, and exposes the five standard CRUD endpoints. For anything custom, fall back to plain `app.GET/POST/...` handlers. See the [REST scaffolding guide](/docs/quick-start/add-rest-handlers). ## Custom views For non-CRUD logic, write a handler: ```go app.POST("/users/{id}/reset-password", func(c *gofr.Context) (any, error) { id := c.PathParam("id") var dto ResetPassword if err := c.Bind(&dto); err != nil { return nil, err } return resetPassword(c, id, dto) }) ``` ## Serializers and validation DRF serializers do three jobs: parsing, validating, and shaping the response. 
In GoFr each is explicit: - **Parsing** — `c.Bind(&dto)` for JSON / form / multipart. - **Validating** — pair the bound struct with `go-playground/validator` (tag-based) or write checks in the handler. - **Shaping** — return a typed struct; the response is the struct. ```go type CreateUser struct { Name string `json:"name" validate:"required,min=3"` Email string `json:"email" validate:"required,email"` } ``` ## ORM to SQL drivers This is the largest mental shift. GoFr does not ship an ORM. You write SQL — typically via `c.SQL.Query` / `Exec` — and pair it with `sqlc` if you want type-safe generated code, or `gorm` if you want ORM-like ergonomics. Both work fine inside GoFr handlers. Plan to replace queryset chains with explicit SQL. Migrate the data model with [GoFr SQL migrations](/docs/advanced-guide/handling-data-migrations) — versioned files applied at boot. ## Permissions and auth DRF's `permission_classes` map to a combination of GoFr authentication middleware (Basic, API Key, OAuth-JWT) and RBAC. See [authentication](/docs/advanced-guide/authentication). Per-request user identity is available via the request context. ## Pagination and filtering DRF's `PageNumberPagination` / `LimitOffsetPagination` and DjangoFilterBackend don't have a built-in equivalent. The idiom is explicit: ```go page := c.Param("page") limit := c.Param("limit") // translate to LIMIT/OFFSET in your SQL ``` This is honest extra work; the trade-off is no implicit query generation surprising you in production. ## Signals and async Django signals (`post_save`, etc.) don't translate directly — they're an in-process pub/sub. The cross-service equivalent is GoFr Pub/Sub: emit a domain event from the handler, subscribe in another service. 
Publish from inside a handler — `GetPublisher` is on `*gofr.Context`, and the payload must be `[]byte`: ```go func handler(c *gofr.Context) (any, error) { if err := c.GetPublisher().Publish(c, "user.created", []byte(`{"id":"1"}`)); err != nil { return nil, err } return map[string]string{"status": "queued"}, nil } ``` Subscribers (Kafka, NATS, SQS, MQTT, Google Pub/Sub, Azure Event Hub) are registered with `app.Subscribe`. ## Configuration `settings.py` and `django-environ` → `configs/.env`, with `configs/.<APP_ENV>.env` overlaid on top (so `APP_ENV=production` reads `configs/.env` then `configs/.production.env` — note the dot prefix and `.env` suffix on the override file). Read keys in code with `app.Config.Get(key)`. ## Observability DRF teams typically wire `django-prometheus`, `opentelemetry-instrumentation-django`, and structlog manually. GoFr emits OpenTelemetry traces, Prometheus metrics at `/metrics`, structured JSON logs (with trace IDs), and exposes health at `/.well-known/health`. Log levels can be changed at runtime via the [remote log-level endpoint](/docs/advanced-guide/remote-log-level-change). ## Gradual adoption Stand up a GoFr service for one bounded context (e.g. notifications, search). From the Django side call it over HTTP; from GoFr call back into Django with `app.AddHTTPService("django-api", baseURL)` — circuit breaker, retries, and rate limiting included. {% faq %} {% faq-item question="Can I run Django and GoFr in the same cluster?" %} Yes. They are independent services. Use Pub/Sub topics or HTTP to bridge; GoFr's HTTP service client adds resilience automatically. {% /faq-item %} {% faq-item question="Is there a Django admin equivalent?" %} No. The CRUD surface is auto-generated via `AddRESTHandlers`, but a polished admin UI is out of scope — most teams build it separately or use a generic admin frontend pointed at the REST endpoints. {% /faq-item %} {% faq-item question="What about Celery beat schedules?"
%} GoFr's built-in cron scheduler covers periodic jobs; queue-driven work moves to Pub/Sub subscribers. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-echo # Migrate from Echo to GoFr {% answer %} Echo handlers translate to GoFr almost line-for-line. The handler signature changes from `func(c echo.Context) error` (where you call `c.JSON(status, value)`) to `func(c *gofr.Context) (any, error)` — you return the value and any error, and GoFr writes the response. Echo's `MiddlewareFunc` becomes the standard `func(http.Handler) http.Handler`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} If you're still picking between the two, see [GoFr vs Echo](/comparison/gofr-vs-echo) for a non-migration comparison. ## Mental model Echo gives you a fast router and a thin `Context`; everything else (logging, metrics, tracing, datasource wiring, health, retries, circuit breaker on outbound calls) you assemble yourself. GoFr is a framework: the same routing surface, plus those operational pieces wired in. Migrating is mostly about deleting code you no longer need. ## Handler translation **Echo:** ```go e := echo.New() e.GET("/users/:id", func(c echo.Context) error { id := c.Param("id") user, err := db.GetUser(id) if err != nil { return echo.NewHTTPError(http.StatusNotFound, err.Error()) } return c.JSON(http.StatusOK, user) }) ``` **GoFr:** ```go app := gofr.New() app.GET("/users/{id}", func(c *gofr.Context) (any, error) { id := c.PathParam("id") return db.GetUser(id) }) ``` Note the path syntax: Echo uses `:id`, GoFr uses `{id}`. 
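One thing the translation drops is Echo's explicit `http.StatusNotFound`. In GoFr the status comes from the error value: a plain `error` maps to 500, while an error implementing `StatusCode() int` controls the code. A minimal sketch of such an error type (GoFr also ships built-in ones like `http.ErrorEntityNotFound`):

```go
package main

import "fmt"

// notFoundErr carries its own HTTP status. GoFr's responder uses a
// StatusCode() int method on the returned error to pick the response
// code; the type and message here are illustrative.
type notFoundErr struct{ resource string }

func (e notFoundErr) Error() string   { return e.resource + " not found" }
func (e notFoundErr) StatusCode() int { return 404 }

func main() {
	var err error = notFoundErr{resource: "user"}
	fmt.Println(err.Error()) // user not found
	// The same interface check the framework performs:
	if sc, ok := err.(interface{ StatusCode() int }); ok {
		fmt.Println(sc.StatusCode()) // 404
	}
}
```

A handler then simply does `return nil, notFoundErr{resource: "user"}` where the Echo version called `echo.NewHTTPError`.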
## Request binding **Echo:** ```go var input CreateUser if err := c.Bind(&input); err != nil { return err } ``` **GoFr:** ```go var input CreateUser if err := c.Bind(&input); err != nil { return nil, err } ``` Both accept JSON, form, and multipart. Validation isn't built into either — pair with `go-playground/validator` if you want tag-based rules. ## Path, query, and header access | Operation | Echo | GoFr | |---|---|---| | Path param | `c.Param("id")` | `c.PathParam("id")` | | Query param | `c.QueryParam("q")` | `c.Param("q")` | | Header | `c.Request().Header.Get("X-Foo")` | Read headers via custom middleware (`func(http.Handler) http.Handler`) on the underlying `*http.Request`; `c.Request` is the abstract `gofr.Request` interface and does not expose `Header` directly | | Raw request | `c.Request()` | Not exposed — `c.Request` is the `gofr.Request` interface (`Param`, `PathParam`, `Bind`, `HostName`, `Params`, `Context`); reach the `*http.Request` through middleware if needed | ## Middleware **Echo:** ```go e.Use(func(next echo.HandlerFunc) echo.HandlerFunc { return func(c echo.Context) error { start := time.Now() err := next(c) log.Printf("%s took %s", c.Path(), time.Since(start)) return err } }) ``` **GoFr:** ```go app.UseMiddleware(func(next http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { start := time.Now() next.ServeHTTP(w, r) log.Printf("%s took %s", r.URL.Path, time.Since(start)) }) }) ``` In practice you rarely need this — request logging, tracing, and Prometheus metrics are already wired. ## Route groups Echo's `e.Group("/api/v1")` does not have a one-line GoFr equivalent. The most common idiom is to register a path prefix on each route, or wrap a registration helper that closes over the prefix. ## Static files and templates Echo's `e.Static` and `e.Renderer` are replaced by GoFr's static file serving — drop assets in the configured static directory. 
Templating libraries (text/template, html/template) work as usual inside handlers. ## Datasources Echo leaves datasource wiring to you. With GoFr, SQL and Redis are auto-initialized from environment variables — set `DB_DIALECT`, `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, `DB_NAME` (or `REDIS_HOST`, `REDIS_PORT`) in `configs/.env` and `gofr.New()` wires the connection. Other clients are registered explicitly with a provider: ```go app.AddMongo(mongo.New(mongo.Config{/* ... */})) ``` You then access them via `c.SQL`, `c.Redis`, `c.Mongo` in handlers. SQL (MySQL/Postgres/Oracle/SQLite/SQL Server), Redis, Mongo, Cassandra, ScyllaDB, Couchbase, ArangoDB, Dgraph, SurrealDB are supported, with first-class migrations for SQL/Mongo/Redis/Dgraph. See [datasources](/docs/datasources). ## Observability Echo users typically integrate `echoprometheus`, `otelecho`, and a logger of their choice. With GoFr these are built-in: OpenTelemetry tracing, Prometheus metrics at `/metrics`, structured JSON logs with trace IDs, health at `/.well-known/health`, and runtime log-level changes. ## Libraries you can typically remove - `otelecho` middleware → built-in tracing. - `echoprometheus` → built-in metrics. - Hand-rolled `/healthz` → `/.well-known/health` is auto-exposed. - Custom retry / circuit-breaker code on outbound calls → `app.AddHTTPService`. ## Gradual adoption Run new endpoints in a GoFr service alongside Echo. Call back into the Echo service from GoFr through `app.AddHTTPService("legacy", baseURL)` with circuit breaker, retries, and rate limiting configured. Migrate routes in batches grouped by data dependency. {% faq %} {% faq-item question="Can I run Echo and GoFr in the same cluster?" %} Yes. They are independent Go binaries. Use `app.AddHTTPService` from the GoFr side to call the Echo service with built-in resilience. {% /faq-item %} {% faq-item question="Will my Echo middleware drop in unchanged?" 
%} No — the signatures differ (`echo.MiddlewareFunc` vs `func(http.Handler) http.Handler`). The logic translates directly; only the wrapper changes. {% /faq-item %} {% faq-item question="Does GoFr have a faster router than Echo?" %} Performance is comparable for most workloads — GoFr trades raw routing micro-benchmarks for built-in observability, datasources, and resilience. Pick based on what your team should not be writing themselves. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-express # Migrate from Express (Node.js) to GoFr {% answer %} Coming from Express to GoFr is more than a framework migration — it's a language change. The mental model translates well: routing, middleware, request/response, and async I/O all have direct Go equivalents. Handlers go from `(req, res) => res.json(data)` to `func(c *gofr.Context) (any, error) { return data, nil }`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. 
{% /callout %} ## Mental model translation | Concept | Express / Node.js | GoFr / Go | |---|---|---| | Async runtime | Single-threaded event loop with `await` | Goroutines + channels (true concurrency) | | Request handler | `(req, res, next) => {}` | `func(c *gofr.Context) (any, error)` | | Middleware | `(req, res, next) => next()` | `func(http.Handler) http.Handler` | | Body parsing | `express.json()` middleware | `c.Bind(&struct)` | | Path params | `req.params.id` | `c.PathParam("id")` | | Query params | `req.query.q` | `c.Param("q")` | | JSON response | `res.json(data)` | `return data, nil` | | Error handling | `next(err)` | `return nil, err` | | Logging | Pino, Winston, Bunyan | Built into GoFr | | Tracing | `@opentelemetry/instrumentation-express` | Built into GoFr | | Database | pg, mongoose, ioredis | Built into GoFr (`c.SQL`, `c.Mongo`, `c.Redis`) | ## Hello world side-by-side **Express:** ```js import express from 'express' const app = express() app.use(express.json()) app.get('/hello', (req, res) => { res.json({ message: 'Hello, world' }) }) app.listen(8000) ``` **GoFr:** ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.GET("/hello", func(c *gofr.Context) (any, error) { return "Hello, world", nil }) app.Run() } ``` ## Async patterns In Node, you `await` a database call. In Go, you call the function directly — concurrency is provided by goroutines, not callbacks or promises. **Express:** ```js app.get('/users/:id', async (req, res) => { const user = await db.getUser(req.params.id) res.json(user) }) ``` **GoFr:** ```go app.GET("/users/{id}", func(c *gofr.Context) (any, error) { return db.GetUser(c.PathParam("id")) }) ``` The `c` (Context) carries deadline and cancellation just like JavaScript's `AbortController`, but is automatically propagated to all DB and HTTP calls. ## What you tend to gain - **Static typing.** Request bodies, response shapes, and DB rows are typed; many Express runtime errors disappear at compile time. 
- **Concurrency.** Goroutines + channels handle background work without async/await chains. - **Single binary deploy.** No `node_modules`, no runtime dependency on Node version. - **Built-in production glue.** Tracing, metrics, structured logging, datasource clients — Express requires you to assemble all of this. ## Common gotchas - **No callback-style error propagation.** `next(err)` becomes `return nil, err`. Errors travel up the call stack; nothing happens implicitly. - **No `req.body` mutation.** Bind into a struct and mutate the struct. - **Goroutines leak silently if you don't `defer` cleanup.** A `defer rows.Close()` in your DB query is not optional in Go. - **JSON shape is slightly different.** GoFr wraps successful responses as `{"data": ...}`. If Express clients expect the raw object, return GoFr's `response.Raw` type instead. - **`process.env` becomes `app.Config.Get(key)`.** Configuration is loaded from `.env` files in the `configs/` directory by default. ## Estimated effort per service A small Express service (10–20 routes, light DB usage) typically takes 2–4 engineering days for a developer new to Go. Most of the time goes to learning Go idioms (error handling, struct composition) rather than the framework itself. ## Recommended adoption 1. Pick a small, isolated Node service to rebuild in GoFr (an internal tool, a webhook receiver). 2. Match its endpoints 1:1. 3. Run both side-by-side in your traffic split or as separate environments. 4. Migrate larger services as your team builds confidence with Go. {% faq %} {% faq-item question="Will my JSON contracts change?" %} GoFr wraps successful responses as `{"data": ...}` by default — and a plain struct returned from a handler is always wrapped. If your existing Express clients expect a different envelope (or no envelope), return one of GoFr's special response types instead: `response.Raw{Data: …}` writes the value directly with no envelope, and `response.Response` lets you control the shape.
The wrapper is only bypassed when you return one of these typed responses, not when you return an arbitrary struct. {% /faq-item %} {% faq-item question="What about NestJS or Fastify users?" %} NestJS users will find GoFr's structured approach familiar (controllers map to handlers, modules to packages). Fastify users will appreciate the lower runtime overhead. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-fastapi # Migrate from FastAPI (Python) to GoFr {% answer %} FastAPI users moving to GoFr trade `async def`/`await` for goroutines that GoFr manages on each request. Pydantic models become Go structs validated through `c.Bind(&struct)`. FastAPI's automatic OpenAPI generation maps to GoFr's built-in Swagger UI, and uvicorn is replaced by a single `gofr.New()` binary. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} ## Mental model GoFr handlers are synchronous functions, but each one runs in its own goroutine — you don't decorate handlers with `async` because the runtime already gives you concurrency for free. Where FastAPI uses `async def` + `await` to avoid blocking the event loop, GoFr blocks the goroutine and lets the Go scheduler interleave others. The result is the same shape of code as a sync FastAPI route, with throughput closer to async. Pydantic's runtime validation becomes compile-time struct typing plus tag-based validation on `Bind`. FastAPI's `Depends()` injection is replaced by passing dependencies through constructors or accessing datasources via `*gofr.Context`. 
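The constructor-passing half of that pattern can be sketched in plain Go; the `userStore` interface and names here are illustrative, not GoFr APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// userStore is a hypothetical dependency; in a real service this might
// wrap c.SQL queries or an external client.
type userStore interface {
	Get(id string) (string, error)
}

// mapStore is an in-memory stand-in used here for illustration.
type mapStore map[string]string

func (m mapStore) Get(id string) (string, error) {
	name, ok := m[id]
	if !ok {
		return "", errors.New("user not found")
	}
	return name, nil
}

// userHandler holds dependencies built once at startup. Its methods can
// then be registered as routes, e.g. with the GoFr signature
// func (h *userHandler) Get(c *gofr.Context) (any, error).
type userHandler struct {
	store userStore
}

func main() {
	h := userHandler{store: mapStore{"1": "Ada"}}
	name, err := h.store.Get("1")
	fmt.Println(name, err)
}
```

Swapping `mapStore` for a real implementation in tests is the same move FastAPI's dependency overrides give you, done with an ordinary interface.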
## Side-by-side: FastAPI handler ↔ GoFr handler **FastAPI:** ```python from fastapi import FastAPI from pydantic import BaseModel class CreateUser(BaseModel): name: str email: str app = FastAPI() @app.post("/users") async def create_user(payload: CreateUser): user = await db.create(payload.dict()) return user ``` **GoFr:** ```go package main import "gofr.dev/pkg/gofr" type CreateUser struct { Name string `json:"name"` Email string `json:"email"` } func main() { app := gofr.New() app.POST("/users", func(c *gofr.Context) (any, error) { var input CreateUser if err := c.Bind(&input); err != nil { return nil, err } return createUser(c, input) }) app.Run() } ``` ## Concurrency: async/await → goroutines A typical FastAPI deployment runs uvicorn workers, each one running its own event loop with cooperative async tasks. A GoFr service is a single binary; each request runs as a goroutine, and I/O calls block the goroutine without blocking the OS thread. There is no `await` keyword in user code — the framework, drivers, and HTTP/SQL clients propagate cancellation via `context.Context` (which `*gofr.Context` embeds). If you previously offloaded CPU work via `run_in_threadpool`, in Go you simply call the function: the scheduler will move blocked goroutines off the worker threads. ## Validation and OpenAPI | FastAPI | GoFr | |---|---| | Pydantic `BaseModel` | Go struct with JSON tags | | `Field(..., min_length=3)` | Use a validator library (e.g. `go-playground/validator`) on the bound struct | | Automatic OpenAPI at `/docs` | Drop your generated `openapi.json` into `static/` to serve via the built-in Swagger UI | | `response_model` | Return typed structs; the response shape is the struct | GoFr ships a Swagger UI that renders any `openapi.json` you place in the static directory — see the [Swagger documentation guide](/docs/advanced-guide/swagger-documentation). 
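For the validator row, a hand-rolled equivalent of Pydantic's field rules looks like this; the rules and messages are illustrative, and most teams reach for `go-playground/validator` tags instead:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

type CreateUser struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// validate mirrors Field(..., min_length=3)-style rules by hand after
// c.Bind has filled the struct. Illustrative checks only.
func (u CreateUser) validate() error {
	if len(u.Name) < 3 {
		return errors.New("name: must be at least 3 characters")
	}
	if !strings.Contains(u.Email, "@") {
		return errors.New("email: must be a valid address")
	}
	return nil
}

func main() {
	fmt.Println(CreateUser{Name: "Al", Email: "a@b.co"}.validate())
	fmt.Println(CreateUser{Name: "Ada", Email: "ada@example.com"}.validate())
}
```

In a handler, call `input.validate()` right after `c.Bind(&input)` and return the error; pairing it with a typed error gives you a 4xx instead of a 500.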
## Dependency injection FastAPI's `Depends()` is replaced by either: - **Constructor passing** — build a struct holding your dependencies and use methods as handlers. - **`*gofr.Context`** — datasources (SQL, Redis, Mongo, Pub/Sub) are accessed through the request context, so per-request injection of those is automatic. ## Datasources FastAPI users typically reach for SQLAlchemy / Tortoise / Motor. In GoFr, SQL and Redis are auto-initialized from environment variables — set `DB_DIALECT`, `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, `DB_NAME` (or `REDIS_HOST`, `REDIS_PORT`) in `configs/.env` and `gofr.New()` wires the connection for you. Other clients are registered explicitly with a provider, e.g.: ```go app.AddMongo(mongo.New(mongo.Config{/* ... */})) ``` Then access them inside the handler via `c.SQL`, `c.Redis`, `c.Mongo`. GoFr supports SQL (MySQL/Postgres/Oracle/SQLite/SQL Server), MongoDB, Redis, Cassandra, ScyllaDB, Couchbase, ArangoDB, Dgraph and SurrealDB. SQL/Mongo/Redis/Dgraph migrations are first-class — see the [datasources reference](/docs/datasources). ## Observability FastAPI users typically wire `opentelemetry-instrumentation-fastapi` and `prometheus-fastapi-instrumentator` themselves. GoFr emits OpenTelemetry traces, Prometheus metrics at `/metrics`, and structured JSON logs (with trace IDs) by default. Health is exposed at `/.well-known/health`. Log levels are changeable at runtime via the [remote log-level endpoint](/docs/advanced-guide/remote-log-level-change). ## Gradual adoption Run your FastAPI service alongside a new GoFr microservice and call it from GoFr using the built-in HTTP client with circuit breaker + retry + rate limiting: ```go app.AddHTTPService("legacy-api", "http://legacy-fastapi:8000") ``` Move endpoints over progressively, repointing your gateway/load balancer until the old service can be retired. {% faq %} {% faq-item question="Can I run FastAPI and GoFr in the same cluster?" %} Yes. They are independent processes. 
GoFr can call your FastAPI service through `app.AddHTTPService` with circuit breaker, retries, and rate limiting configured. {% /faq-item %} {% faq-item question="Is there an equivalent of Pydantic's strict validation?" %} GoFr binds JSON, form, and multipart bodies into structs, but doesn't ship a validator. Most teams pair `c.Bind` with `go-playground/validator` for tag-based validation. {% /faq-item %} {% faq-item question="Where do background tasks (FastAPI's BackgroundTasks) go?" %} Use goroutines for fire-and-forget work scoped to the request, GoFr's cron jobs for scheduled work, or Pub/Sub subscribers (Kafka, NATS, SQS, MQTT, Google Pub/Sub, Azure Event Hub) for queue-based jobs. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-fiber # Migrate from Fiber to GoFr {% answer %} Migrating from Fiber to GoFr also moves you from `fasthttp` to `net/http`. This is usually a simplification — `net/http`-compatible libraries become directly usable, and middleware translation is straightforward. Handlers go from `func(c *fiber.Ctx) error` to `func(c *gofr.Context) (any, error)`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. 
{% /callout %} ## Handler translation **Fiber:** ```go app.Get("/users/:id", func(c *fiber.Ctx) error { id := c.Params("id") user, err := db.GetUser(id) if err != nil { return c.Status(404).JSON(fiber.Map{"error": err.Error()}) } return c.JSON(user) }) ``` **GoFr:** ```go app.GET("/users/{id}", func(c *gofr.Context) (any, error) { id := c.PathParam("id") user, err := db.GetUser(id) return user, err }) ``` ## Request body and params | Operation | Fiber | GoFr | |---|---|---| | Path param | `c.Params("id")` | `c.PathParam("id")` | | Query param | `c.Query("q")` | `c.Param("q")` | | Body parse | `c.BodyParser(&input)` | `c.Bind(&input)` | ## Middleware Fiber middleware is `fiber.Handler`. GoFr middleware is the standard `net/http` `func(http.Handler) http.Handler`. Most third-party `net/http` middleware works directly with GoFr. ## Observability and datasources This is where the migration pays off most. In Fiber: - Tracing → install `otelfiber`, configure exporter, propagate spans manually for DB calls. - Metrics → install `fiber/v2/middleware/monitor` or expose Prometheus separately. - Database → use `database/sql` or driver of choice; instrument it yourself. In GoFr: - Tracing, metrics, and structured logging are emitted by default. - DB clients (`c.SQL`, `c.Redis`, `c.Mongo`, etc.) are auto-instrumented with span correlation. ## net/http compatibility If your Fiber service used `adaptor.HTTPHandler` to wrap `net/http` middleware, those adapters become unnecessary in GoFr — `net/http` is native. Drop them. ## Common gotchas - **fasthttp libraries don't work with `net/http`.** If you depend on `valyala/fasthttp`-specific packages, plan to swap each for a `net/http` equivalent. - **`c.Locals` has no direct equivalent.** `*gofr.Context` does not expose `Set` / `Get` methods for per-request locals. Either pass values through Go closures, or — since `*gofr.Context` embeds `context.Context` — use `context.WithValue(c, key, value)` and retrieve with `c.Value(key)`. 
- **`adaptor.HTTPHandler`** wrappers you used to call `net/http` middleware from Fiber are now unnecessary — drop them. - **Streaming response patterns differ.** GoFr does not ship a built-in SSE responder; for raw streaming, write to the underlying `http.ResponseWriter` from a custom middleware. - **Compression / static-file middleware** that you composed in Fiber needs to be re-added explicitly in GoFr if you relied on it. ## Estimated effort A typical Fiber-based REST service migrates in 1–2 engineering days. The biggest unknown is whether any of your dependencies are fasthttp-only. ## Recommended order 1. Migrate one new service to GoFr first. 2. Validate datasource clients connect to existing databases. 3. Confirm OTel traces and Prometheus metrics reach existing collectors. 4. Migrate remaining services as you touch them. {% faq %} {% faq-item question="Does GoFr support Fiber's request lifecycle features (Locals, etc.)?" %} There is no `c.Locals`-style per-request locals API on `*gofr.Context`. Pass values through closures, or use the standard `context.Context` mechanism — `*gofr.Context` embeds `context.Context`, so `context.WithValue(c, key, value)` and `c.Value(key)` work. {% /faq-item %} {% faq-item question="Can I keep using fasthttp libraries with GoFr?" %} No — GoFr is `net/http`-based. Libraries written for fasthttp won't work directly. Most have `net/http`-equivalent alternatives. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-flask # Migrate from Flask (Python) to GoFr {% answer %} Flask developers tend to like GoFr because both are minimal in the right places — small, opinionated cores with sensible defaults. Flask's `@app.route` decorator becomes `app.GET("/path", handler)`. Request access via `request.json` becomes `c.Bind(&struct)`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). 
It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} ## Mental model translation | Concept | Flask / Python | GoFr / Go | |---|---|---| | Concurrency | WSGI process / thread workers | Goroutines (one process, true concurrency) | | Route | `@app.route('/users/<id>')` | `app.GET("/users/{id}", handler)` | | Request body | `request.get_json()` | `c.Bind(&struct)` | | Path param | `def view(id):` (function arg) | `c.PathParam("id")` | | Query param | `request.args.get('q')` | `c.Param("q")` | | Response | `return jsonify(data), 200` | `return data, nil` | | Error response | `abort(404)` | `return nil, err` with an error implementing `StatusCode() int` | | Logging | `logging` + structlog | Built-in GoFr structured logging | | Tracing | OpenTelemetry Python instrumentation | Built into GoFr | | Database | SQLAlchemy / psycopg / pymongo | Built-in clients | | Background jobs | Celery / RQ | GoFr cron, Pub/Sub subscribers | ## Hello world **Flask:** ```python from flask import Flask, jsonify app = Flask(__name__) @app.route('/hello') def hello(): return jsonify(message="Hello, world") if __name__ == '__main__': app.run(port=8000) ``` **GoFr:** ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.GET("/hello", func(c *gofr.Context) (any, error) { return "Hello, world", nil }) app.Run() } ``` ## Concurrency: from gunicorn workers to goroutines A typical Flask deployment runs gunicorn with N worker processes (or threads). Each request occupies one worker for its duration, including I/O wait. Scaling is by adding processes / replicas. A GoFr service is a single binary. Each request is a goroutine. I/O is non-blocking. You typically need fewer instances at the same throughput. ## What you can drop - `python-json-logger` / structlog config → built-in. - `flask-prometheus-metrics` → built-in.
- `opentelemetry-instrumentation-flask` → built-in. - Custom DB connection pooling on top of SQLAlchemy → handled by GoFr's SQL client. - `flask-healthz` / hand-rolled `/healthz` → auto-exposed at `/.well-known/health`. ## Common gotchas - **No global `request`.** The handler receives a `*Context` parameter; pass it where you need it. Goroutines + a goroutine-local `request` don't mix in Go. - **`@app.errorhandler(Exception)` becomes explicit error returns.** Every error travels back as the second return value. - **Database sessions aren't `flask-sqlalchemy`.** GoFr's SQL client gives you a connection pool with raw queries; pair with `sqlc` for type-safe queries if you want ORM-like ergonomics. - **Decorators don't translate.** `@app.before_request` becomes middleware; `@app.errorhandler` becomes explicit error mapping in your handlers. - **`abort(404)` becomes `return nil, err` where `err` is a typed error that implements `StatusCode() int`.** A plain `error` serializes as a 500. Return one of GoFr's built-in error types (`http.ErrorEntityNotFound`, etc.) or define your own type satisfying the `StatusCode()` interface so the responder picks up the right HTTP code. See [Error Handling](/docs/advanced-guide/gofr-errors). ## Estimated effort per service A small Flask service (10-20 routes) typically takes 2–4 engineering days for a Python developer new to Go. Most of the time is spent on Go idioms. ## Recommended adoption 1. Pick a small Flask service (an internal webhook, a CRUD API) and rebuild it in GoFr. 2. Run side-by-side, validate observability output. 3. Iterate — port more services as your team gains comfort. {% faq %} {% faq-item question="Are there async equivalents of Quart / FastAPI in Go?" %} Go's concurrency primitives mean you don't need an async/await separation — every handler runs in its own goroutine, and I/O is non-blocking by default. GoFr fits this model. {% /faq-item %} {% faq-item question="Does GoFr have an ORM like SQLAlchemy?" %} No. 
GoFr's SQL client provides connection pooling, observability, and parameter binding, not an ORM. Many Go teams use `sqlc` for type-safe queries; some use `gorm`. Both work fine inside GoFr handlers. {% /faq-item %} {% faq-item question="Can I run Celery-style background jobs in GoFr?" %} Yes — GoFr has built-in cron scheduling and Pub/Sub subscribers (Kafka, NATS, Google Pub/Sub, MQTT, SQS, Azure Event Hub). Combined, these cover most Celery use cases. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-gin # Migrate from Gin to GoFr {% answer %} Gin handlers translate to GoFr cleanly. The biggest mental shift is the handler signature: `func(c *gin.Context)` becomes `func(c *gofr.Context) (any, error)` — you return data and an error instead of calling `c.JSON(status, value)`. Middleware uses the standard `net/http` signature instead of Gin's `gin.HandlerFunc`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. 
{% /callout %} ## Handler translation **Gin:** ```go r.GET("/users/:id", func(c *gin.Context) { id := c.Param("id") user, err := db.GetUser(id) if err != nil { c.JSON(404, gin.H{"error": err.Error()}) return } c.JSON(200, user) }) ``` **GoFr:** ```go app.GET("/users/{id}", func(c *gofr.Context) (any, error) { id := c.PathParam("id") user, err := db.GetUser(id) if err != nil { return nil, err } return user, nil }) ``` ## Request binding **Gin:** ```go var input CreateUser if err := c.ShouldBindJSON(&input); err != nil { c.JSON(400, gin.H{"error": err.Error()}) return } ``` **GoFr:** ```go var input CreateUser if err := c.Bind(&input); err != nil { return nil, err } ``` ## Query and path parameters | Operation | Gin | GoFr | |---|---|---| | Path param | `c.Param("id")` | `c.PathParam("id")` | | Query param | `c.Query("q")` | `c.Param("q")` | | Default query | `c.DefaultQuery("page", "1")` | `c.Param("page")` (handle empty case) | ## Middleware **Gin:** ```go r.Use(func(c *gin.Context) { start := time.Now() c.Next() log.Printf("%s took %s", c.Request.URL.Path, time.Since(start)) }) ``` **GoFr:** ```go app.UseMiddleware(func(next http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { start := time.Now() next.ServeHTTP(w, r) log.Printf("%s took %s", r.URL.Path, time.Since(start)) }) }) ``` In practice you rarely need this in GoFr — request logging, tracing, and metrics are built in. ## Libraries you can typically remove After moving to GoFr, several Gin-side helpers usually become unnecessary because the framework already includes equivalents — keep whatever you'd still rather wire yourself: - `otelgin` middleware → built-in tracing. - `gin-prometheus` → built-in metrics at `/metrics`. - `zap-gin` request logging → built-in structured logging with trace IDs. - Manual `db.Ping()` / health endpoints → auto-exposed at `/.well-known/health`. 
- Custom retry / circuit-breaker code on outbound HTTP calls → `app.AddHTTPService` with config. ## Common gotchas - **`c.MustGet` has no direct equivalent.** Use `c.Get(key)` and handle the missing-value case explicitly. - **Gin's middleware ordering matters at registration time.** GoFr's default observability middleware runs before your custom `UseMiddleware` chain — assume tracing and metrics are already wired by the time your code runs. - **Response wrapping is different.** GoFr returns `{"data": ...}` on success and `{"error": ...}` on error, and a plain struct returned from a handler is always wrapped. If your existing clients expect the raw object, return one of GoFr's special response types — `response.Raw{Data: …}` writes the payload directly with no envelope, and `response.Response` lets you control the shape. Returning an arbitrary struct does not bypass the envelope. - **No `gin.H{}`.** Use plain `map[string]any{}` or, better, named structs. - **Validation isn't built in.** Gin uses `binding:"required"` tags via go-playground/validator by default. With GoFr, pick your validator explicitly. ## Estimated effort A typical 5-10 endpoint Gin service migrates in 1–2 engineering days. Most of the time goes to validating that observability output (traces, metrics) lands in your existing stack with the right names — not to handler translation. ## Recommended order 1. Move one endpoint to GoFr in a new file/service. 2. Validate observability (traces and metrics) reach your existing collectors. 3. Port remaining endpoints in batches grouped by data dependency. 4. Drop now-redundant Gin middleware libraries. 5. Decommission the old service when traffic has shifted. {% faq %} {% faq-item question="What happens to my existing tests?" %} GoFr provides testing utilities — see the [testing reference](/docs/references/testing). Most Gin tests rewrite naturally because the handler logic is similar; the test setup changes. 
{% /faq-item %} {% faq-item question="Does GoFr support all of Gin's binding tags?" %} GoFr's Bind handles JSON, form, and multipart. Validation is left to the choice of library (e.g., go-playground/validator on bound structs). {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-laravel # Migrate from Laravel (PHP) to GoFr {% answer %} Laravel devs moving to GoFr trade Eloquent and the Service Container for explicit SQL and constructor passing — and gain a static binary, built-in observability, and goroutine concurrency. Routes, controllers, middleware, validation, queues, and CLI commands all have direct GoFr analogues; the `.env` file even keeps its name. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. 
{% /callout %} ## Mental model | Laravel | GoFr | |---|---| | `Route::get('/users/{id}', ...)` | `app.GET("/users/{id}", handler)` | | Controller method | Handler function (or method on a struct) | | `$request->input('name')` | `var dto CreateUser; c.Bind(&dto)` | | Form Request validation | Struct tags + a validator library | | Middleware (`Kernel.php`) | `app.UseMiddleware(...)` | | Service Container / `app()` | Constructor passing of dependencies | | Eloquent ORM | SQL drivers (`c.SQL`); pair with `sqlc` or `gorm` for ergonomics | | Migrations (`php artisan migrate`) | GoFr SQL migrations | | Artisan commands | GoFr CLI / sub-commands | | Queues (database/Redis/SQS) | GoFr Pub/Sub (Kafka, NATS, SQS, MQTT, Google Pub/Sub, Azure Event Hub) | | Scheduler (`Kernel::schedule`) | `app.AddCronJob(...)` | | `.env` | `configs/.env` | | Telescope / Horizon dashboards | Prometheus metrics + traces in your existing stack | | Sanctum / Passport | Built-in Basic / APIKey / OAuth-JWT + RBAC | ## Side-by-side: controller ↔ handler **Laravel:** ```php class UserController extends Controller { public function store(Request $request) { $data = $request->validate([ 'name' => 'required|min:3', 'email' => 'required|email', ]); return User::create($data); } } Route::post('/users', [UserController::class, 'store']); ``` **GoFr:** ```go type CreateUser struct { Name string `json:"name" validate:"required,min=3"` Email string `json:"email" validate:"required,email"` } app.POST("/users", func(c *gofr.Context) (any, error) { var dto CreateUser if err := c.Bind(&dto); err != nil { return nil, err } return createUser(c, dto) }) ``` ## Auto-CRUD via AddRESTHandlers If your Laravel resource is "controller + Eloquent model + standard CRUD", you can collapse it in GoFr to: ```go if err := app.AddRESTHandlers(&User{}); err != nil { app.Logger().Fatal(err) } ``` — which exposes `GET / POST / GET/{id} / PUT/{id} / DELETE/{id}` against your struct/table. 
See the [REST scaffolding guide](/docs/quick-start/add-rest-handlers). ## Validation Laravel's Form Requests collapse parsing + validating into one. In GoFr it's two steps: - `c.Bind(&dto)` — parse JSON / form / multipart. - A validator library (e.g. `go-playground/validator`) — apply struct-tag rules. The trade-off is more explicit code, less magic. ## Middleware Laravel's `Kernel.php` middleware groups translate to: ```go app.UseMiddleware(authMiddleware) app.UseMiddleware(rateLimiter) ``` Authentication options ship in GoFr (Basic, API Key, OAuth-JWT — see [authentication](/docs/advanced-guide/authentication)) and you can layer RBAC on top. ## Eloquent → SQL drivers This is the biggest shift. GoFr does not include an ORM. Replace Eloquent calls with explicit SQL via `c.SQL.Query` / `Exec`, and pair with `sqlc` for generated type-safe queries or `gorm` for ORM-like ergonomics. Migrations move from `php artisan make:migration` to versioned [GoFr SQL migrations](/docs/advanced-guide/handling-data-migrations) — files applied in order at boot. ## Queues → Pub/Sub Laravel queues backed by Redis / database / SQS map to GoFr's Pub/Sub: ```go app.Subscribe("user.created", func(c *gofr.Context) error { var msg UserCreated if err := c.Bind(&msg); err != nil { return err } return process(c, msg) }) ``` Supported backends: Kafka, NATS, SQS, MQTT, Google Pub/Sub, Azure Event Hub. Publish from inside a handler — `GetPublisher` is on `*gofr.Context`, and the payload must be `[]byte`: ```go func handler(c *gofr.Context) (any, error) { if err := c.GetPublisher().Publish(c, "user.created", []byte(`{"id":"1"}`)); err != nil { return nil, err } return map[string]string{"status": "queued"}, nil } ``` ## Artisan → GoFr CLI Laravel's Artisan commands (cleanup jobs, data backfills, one-off scripts) map onto GoFr's CLI / sub-command support — register sub-commands on the same app and invoke as `./mybinary <subcommand>`. See the [CLI command guide](/docs/advanced-guide/building-cli-applications).
For periodic work, use `app.AddCronJob(schedule, jobName, fn)` (three arguments) instead of `php artisan schedule:run`, e.g. `app.AddCronJob("0 * * * *", "hourly-cleanup", func(ctx *gofr.Context) { /* ... */ })`. ## Datasources GoFr auto-initializes SQL and Redis from environment variables — set `DB_DIALECT`, `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, `DB_NAME` (or `REDIS_HOST`, `REDIS_PORT`) in `configs/.env` and `gofr.New()` wires the connection. Other clients are registered explicitly with a provider: ```go app.AddMongo(mongo.New(mongo.Config{/* ... */})) ``` SQL (MySQL/Postgres/Oracle/SQLite/SQL Server), Redis, Mongo, Cassandra, ScyllaDB, Couchbase, ArangoDB, Dgraph, SurrealDB are supported. File storage drivers cover Local, S3, GCS, Azure Blob, FTP, SFTP — useful when porting Laravel filesystem disks. ## Configuration `.env` — same name, slightly different conventions. GoFr reads `configs/.env`, with environment-specific files (`configs/.production.env`) layered on via `APP_ENV`. Read in code with `app.Config.Get(key)`. ## Observability Telescope and Horizon are application-bundled dashboards; GoFr instead exports OpenTelemetry traces and Prometheus metrics at `/metrics` to whatever stack you already run (Grafana, Datadog, Honeycomb, etc.). Structured JSON logs include trace IDs. Health is exposed at `/.well-known/health`. Log levels are changeable at runtime. ## Gradual adoption Pick a bounded context (notifications, search, file processing) and rebuild it as a GoFr service. From Laravel call it over HTTP; from GoFr call back into Laravel with `app.AddHTTPService("laravel-api", baseURL)` — circuit breaker, retries, and rate limiting included. {% faq %} {% faq-item question="Can I run Laravel and GoFr in the same cluster?" %} Yes. They are independent services. Bridge via HTTP (with GoFr's resilient HTTP client) or via Pub/Sub topics shared with Laravel queue workers (e.g. SQS). {% /faq-item %} {% faq-item question="Is there a Blade equivalent?"
%} GoFr is API-first and doesn't ship a templating engine. For server-rendered HTML, Go's `html/template` works inside handlers, but most teams pair GoFr with a separate frontend. {% /faq-item %} {% faq-item question="What about Laravel Echo / WebSockets?" %} GoFr supports WebSocket directly. Laravel Echo's broadcast pattern translates to a Pub/Sub backend fanning out to GoFr WebSocket connections. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-nestjs # Migrate from NestJS to GoFr {% answer %} NestJS teams moving to GoFr keep the same architectural shape — controllers, services, validation, microservices — but lose the decorator metaphor. Controllers become plain handler functions, modules become Go packages with explicit constructor wiring, DTO classes become Go structs validated via `c.Bind`, and the `@nestjs/microservices` transports map onto GoFr's built-in Pub/Sub. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} ## Mental model NestJS leans on TypeScript decorators and a runtime DI container assembled from `@Module` metadata. 
Go has neither decorators nor a Nest-style DI container, so the structure becomes more explicit: | NestJS | GoFr | |---|---| | `@Controller('/users')` + `@Get(':id')` | `app.GET("/users/{id}", handler)` | | `@Body() dto: CreateUserDto` | `var dto CreateUser; c.Bind(&dto)` | | `@Param('id')` | `c.PathParam("id")` | | `@Query('q')` | `c.Param("q")` | | `@Module` + provider injection | Constructor passing of dependencies | | `Pipes` (validation, transform) | Struct tags + a validator library | | `Interceptors` / `Guards` | GoFr middleware | | `@nestjs/microservices` (TCP/Redis/NATS/Kafka) | `app.Subscribe("topic", handler)` over Kafka, NATS, SQS, MQTT, Google Pub/Sub, Azure Event Hub | | `@nestjs/swagger` | Built-in Swagger UI from your `openapi.json` | | `@nestjs/typeorm`, `@nestjs/mongoose` | SQL auto-initialized from `DB_DIALECT`/`DB_HOST`/etc. env vars; `app.AddMongo(provider)` for Mongo, plus GoFr migrations | | `@nestjs/schedule` (`@Cron`) | `app.AddCronJob(...)` | | `@nestjs/terminus` health | `/.well-known/health` (auto) | ## Side-by-side: controller ↔ handler **NestJS:** ```ts @Controller('users') export class UsersController { constructor(private readonly users: UsersService) {} @Post() async create(@Body() dto: CreateUserDto) { return this.users.create(dto); } } ``` **GoFr:** ```go type UsersHandler struct { Users UsersService } func (h *UsersHandler) Create(c *gofr.Context) (any, error) { var dto CreateUser if err := c.Bind(&dto); err != nil { return nil, err } return h.Users.Create(c, dto) } func main() { app := gofr.New() h := &UsersHandler{Users: NewUsersService()} app.POST("/users", h.Create) app.Run() } ``` ## Validation and DTOs NestJS pairs `class-validator` decorators with a `ValidationPipe`. In GoFr, a DTO is a Go struct with JSON tags; tag-based validation is added by pairing `c.Bind` with `go-playground/validator` (or any validator of your choice). 
```go type CreateUser struct { Name string `json:"name" validate:"required,min=3"` Email string `json:"email" validate:"required,email"` } ``` ## Auto-CRUD via AddRESTHandlers If you have a typical "Nest CRUD module" — controller + service + entity + repository — GoFr can generate the full CRUD surface for an entity with [`AddRESTHandlers`](/docs/quick-start/add-rest-handlers). One method registers `GET / POST / GET/{id} / PUT/{id} / DELETE/{id}` against your model. ## Microservices and Pub/Sub `@nestjs/microservices` transports map cleanly: | Nest transport | GoFr equivalent | |---|---| | Kafka | Built-in Kafka subscriber/publisher | | NATS | Built-in NATS subscriber/publisher | | Redis Pub/Sub | Use Redis client as datasource | | RabbitMQ (Nest's `Transport.RMQ`) | Not built into GoFr — use Kafka, NATS, SQS, MQTT, Google Pub/Sub, or Azure Event Hub instead, or bridge via a community driver | | MQTT | Built-in MQTT subscriber | Subscribe pattern: ```go app.Subscribe("user.created", func(c *gofr.Context) error { var msg UserCreated if err := c.Bind(&msg); err != nil { return err } return process(c, msg) }) ``` ## gRPC For Nest's `@GrpcMethod` setups, GoFr supports gRPC servers and interceptors directly — see the [gRPC guide](/docs/advanced-guide/grpc). ## Configuration `@nestjs/config` (`.env` + schema) → GoFr loads `configs/.env` (with environment overrides) by default. Read at runtime with `app.Config.Get(key)`. ## Observability `@nestjs/terminus`, `@willsoto/nestjs-prometheus`, and OpenTelemetry instrumentation are typically wired by hand. GoFr ships OpenTelemetry tracing, Prometheus metrics at `/metrics`, structured JSON logs, `/.well-known/health`, and runtime log-level change. ## Gradual adoption Stand up a GoFr microservice that owns one bounded context. From the Nest side, call it via HTTP or share a Pub/Sub topic. 
From GoFr, call back into Nest with `app.AddHTTPService("nest-api", baseURL)` — circuit breaker, retries, and rate limiting are configured per service. {% faq %} {% faq-item question="Can I run NestJS and GoFr in the same cluster?" %} Yes. They are independent processes. Pub/Sub topics and HTTP contracts bridge the two; GoFr's outbound HTTP client adds circuit breaker and retries automatically. {% /faq-item %} {% faq-item question="Is there a Nest-style CLI scaffolder?" %} GoFr provides `AddRESTHandlers` for entity-driven CRUD scaffolding. There isn't a per-resource generator CLI; most teams use editor templates or copy a sample handler. {% /faq-item %} {% faq-item question="Do I lose decorator-driven Swagger?" %} You give up decorator-driven generation, but GoFr serves a built-in Swagger UI from any `openapi.json` you place in the static directory. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-rails # Migrate from Rails (Ruby) to GoFr {% answer %} Rails and GoFr both lean opinionated, but Rails is opinionated about full-stack web apps and GoFr is opinionated about microservices. The mapping is operational rather than line-for-line: controllers become handler functions, ActiveRecord becomes explicit SQL, Active Job becomes Pub/Sub, Action Cable becomes WebSocket — and a lot of Rails magic ("convention over configuration") disappears in favor of explicit Go code with sensible framework defaults. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} ## Mental model Rails earns its reputation by making the common case very short — a `scaffold` produces routes, controller, model, views, migrations, tests in one command. 
GoFr is opinionated in the same spirit but for a different shape of system: you are not building a server-rendered web app, you are building one of many small services that talk over HTTP / gRPC / Pub/Sub. The routing surface is smaller, the controllers are simpler, and the framework leans hard on built-in observability and resilience instead of view rendering. | Rails | GoFr | |---|---| | `routes.rb` | `app.GET/POST/...` calls in `main` | | Controller action | Handler function (or method on struct) | | `params[:id]` | `c.PathParam("id")` | | `params[:q]` | `c.Param("q")` | | Strong Parameters | `c.Bind(&dto)` into a typed struct | | ActiveRecord | SQL drivers (`c.SQL`); pair with `sqlc` or `gorm` for ergonomics | | Migrations (`db/migrate/*`) | GoFr SQL migrations | | Concerns / before_action | Middleware / composition | | Active Job (Sidekiq, etc.) | GoFr Pub/Sub subscribers | | Action Cable | GoFr WebSocket | | Action Mailer | A mail library called from a handler or subscriber | | `rails console` | No direct equivalent — write a CLI sub-command for one-off tasks | | `config/database.yml` + `secrets.yml` | `configs/.env` (+ per-env files) | | Puma workers | One Go binary, goroutine-per-request | ## Side-by-side: controller ↔ handler **Rails:** ```ruby class UsersController < ApplicationController def create user = User.create!(user_params) render json: user, status: :created end private def user_params params.require(:user).permit(:name, :email) end end ``` **GoFr:** ```go type CreateUser struct { Name string `json:"name"` Email string `json:"email"` } app.POST("/users", func(c *gofr.Context) (any, error) { var dto CreateUser if err := c.Bind(&dto); err != nil { return nil, err } return createUser(c, dto) }) ``` ## Auto-CRUD via AddRESTHandlers For the "scaffold User name:string email:string" case, GoFr offers: ```go if err := app.AddRESTHandlers(&User{}); err != nil { // GET, POST, GET/{id}, PUT/{id}, DELETE/{id} app.Logger().Fatal(err) } ``` See the [REST 
scaffolding guide](/docs/quick-start/add-rest-handlers). For non-CRUD actions (Rails' `member` / `collection` routes), write a plain handler. ## ActiveRecord → SQL drivers The biggest mental shift. GoFr does not include an ORM. Replace ActiveRecord calls with explicit SQL via `c.SQL.Query` / `Exec`, optionally generated by `sqlc`. Lazy associations, scopes, and `includes(:posts)` are not implicit — write the JOIN, or two queries, deliberately. Migrations move from `db/migrate/2024..._create_users.rb` to versioned [GoFr SQL migrations](/docs/advanced-guide/handling-data-migrations) — files applied in order at boot. SQL (MySQL/Postgres/Oracle/SQLite/SQL Server), MongoDB, Redis, Cassandra, ScyllaDB, Couchbase, ArangoDB, Dgraph, SurrealDB are supported. ## Active Job → Pub/Sub Rails background jobs (Sidekiq / Resque / GoodJob) map to GoFr Pub/Sub subscribers: ```go app.Subscribe("user.welcome", func(c *gofr.Context) error { var msg WelcomeJob if err := c.Bind(&msg); err != nil { return err } return sendWelcome(c, msg) }) ``` Backends: Kafka, NATS, SQS, MQTT, Google Pub/Sub, Azure Event Hub. Publish from inside a handler — `GetPublisher` is on `*gofr.Context`, and the payload must be `[]byte`: ```go func handler(c *gofr.Context) (any, error) { if err := c.GetPublisher().Publish(c, "user.welcome", []byte(`{"id":"1"}`)); err != nil { return nil, err } return map[string]string{"status": "queued"}, nil } ``` For periodic jobs (cron-style), use `app.AddCronJob(schedule, jobName, fn)` — three arguments, e.g. `app.AddCronJob("0 * * * *", "hourly-report", reportFn)`. ## Action Cable → WebSocket GoFr supports WebSocket directly — register a WS handler on the app, manage connections, broadcast through your own routing or via Pub/Sub fan-out. ## Concerns and before_action Rails' before_action hooks and Concerns translate to two patterns: - **Cross-cutting** (auth, rate limiting, logging) — `app.UseMiddleware(...)`. 
- **Per-handler** — wrap the handler, or add a small helper called at the top of each handler. Authentication options (Basic, API Key, OAuth-JWT) and RBAC are built in. ## Configuration `config/database.yml`, `secrets.yml`, and Rails credentials → `configs/.env`, with `configs/.production.env` etc. layered on via `APP_ENV`. Read with `app.Config.Get(key)`. ## CLI tasks (`rake`, generators) There is no Rails console, but for one-off tasks (data backfills, admin actions) GoFr supports CLI sub-commands on the same binary — see the [CLI command guide](/docs/advanced-guide/building-cli-applications). Register a sub-command and invoke as `./mybinary <subcommand>`. ## Observability Rails teams typically wire `prometheus-client`, `opentelemetry-instrumentation-rails`, and a logger by hand. GoFr emits OpenTelemetry traces, Prometheus metrics at `/metrics`, structured JSON logs (with trace IDs), and `/.well-known/health` automatically. Log levels can be changed at runtime via the [remote log-level endpoint](/docs/advanced-guide/remote-log-level-change). ## Gradual adoption Pick a bounded context — webhook receiver, notification service, search — and rebuild it in GoFr. Run alongside Rails, route via your gateway. From GoFr call back into Rails with `app.AddHTTPService("rails", baseURL)` — circuit breaker, retries, and rate limiting included. Move endpoints across one bounded context at a time. {% faq %} {% faq-item question="Can I run Rails and GoFr in the same cluster?" %} Yes. They are independent processes. Bridge via HTTP (`app.AddHTTPService` adds resilience) or via Pub/Sub topics shared between Rails Active Job and GoFr subscribers (e.g. SQS). {% /faq-item %} {% faq-item question="Is there a Rails-style asset pipeline?" %} No — GoFr is API-first. For server-rendered HTML, Go's `html/template` is available inside handlers, but the typical pairing is GoFr for the API and a separate frontend (Next.js, etc.). {% /faq-item %} {% faq-item question="What about RSpec / system tests?"
%} GoFr provides testing utilities you can use with Go's `testing` package — see the [testing reference](/docs/references/testing). Unit-test handlers as functions; integration-test against a running app instance. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate/from-spring-boot # Migrate from Spring Boot (Java) to GoFr {% answer %} Spring Boot developers tend to feel at home in GoFr — both frameworks share an opinionated, batteries-included philosophy. Spring's controllers map to GoFr handlers; Spring's auto-configuration maps to GoFr's defaults; Spring's Actuator endpoints map to GoFr's built-in `/.well-known/health` and `/metrics`. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. 
{% /callout %} ## Mental model translation | Spring Boot | GoFr | |---|---| | `@RestController` + `@RequestMapping` | `app.GET("/path", handler)` | | `@PathVariable` | `c.PathParam("id")` | | `@RequestParam` | `c.Param("q")` | | `@RequestBody` | `c.Bind(&struct)` | | `@Autowired` field injection | Struct fields populated via constructor or via GoFr's container | | `application.yaml` | `.env` + GoFr Configs | | `Spring Data JPA` | Plain SQL via `c.SQL` (or `sqlc` / `gorm` if you want ORM-like) | | `@Scheduled` cron | `app.AddCronJob(...)` | | `@KafkaListener` | `app.Subscribe("topic", handler)` | | Spring Actuator | Built-in `/.well-known/health`, `/metrics` | | Micrometer | Built-in Prometheus metrics | | Spring Cloud Sleuth | Built-in OpenTelemetry tracing | | Resilience4j circuit breaker | Built-in via `app.AddHTTPService` | ## Hello world **Spring Boot:** ```java @RestController class HelloController { @GetMapping("/hello") public Map<String, String> hello() { return Map.of("message", "Hello, world"); } } ``` **GoFr:** ```go package main import "gofr.dev/pkg/gofr" func main() { app := gofr.New() app.GET("/hello", func(c *gofr.Context) (any, error) { return "Hello, world", nil }) app.Run() } ``` ## Configuration: application.yaml → .env **Spring Boot:** ```yaml spring: datasource: url: jdbc:mysql://localhost:3306/mydb username: root server: port: 8080 ``` **GoFr (`configs/.env`):** ```bash HTTP_PORT=8080 DB_HOST=localhost DB_PORT=3306 DB_NAME=mydb DB_USER=root ``` 12-factor / environment-variable configuration is GoFr's default; per-environment files are a natural fit for Kubernetes ConfigMaps and Secrets. ## Dependency injection Go does not have class-based DI like Spring. The conventions are: - **Plain struct composition.** Most services pass dependencies through constructors. This is enough for the majority of cases. - **GoFr Container.** Datasources (SQL, Redis, Mongo, Pub/Sub clients) are provided by GoFr's Container and accessed via the request `Context`.
- **Wire or Fx.** If you want a generated DI graph, both libraries integrate cleanly with GoFr. ## What you can drop - Spring Web Starter, Spring Data, Spring Security, Spring Cloud — replaced by GoFr's bundled equivalents. - Spring Boot Actuator, Micrometer, Sleuth — replaced by GoFr's built-in observability. - Resilience4j patterns on outbound HTTP — built into GoFr's service-to-service client. ## What you'll likely appreciate - **Startup time** measured in tens of milliseconds, not seconds. - **Memory footprint** of tens of MB, not hundreds. - **No JVM tuning, no GC pauses to chase.** Go's runtime is forgiving. - **Single static binary deploy.** Smaller container images. ## Common gotchas - **No annotation-driven anything.** Routing, validation, security, transactions — all explicit code, not annotations on classes. - **No JPA-style lazy loading.** SQL is explicit. If you depend on lazy-loaded relations, plan to do JOINs or eager-load explicitly. - **No `application-prod.yaml` / `application-staging.yaml` profile system.** GoFr loads `configs/.env` and then overlays `configs/.<APP_ENV>.env` on top — so `APP_ENV=production` reads `configs/.env` then `configs/.production.env`. Note the dot prefix and `.env` suffix on the override file (not `.env.production`). - **No bean lifecycle.** Replace `@PostConstruct` / `@PreDestroy` with `OnStart` and graceful shutdown in `main`. - **Generics syntax is different from Java.** Go generics exist but are used sparingly; most code reads more like pre-generics Java. - **Dependency injection is wiring, not magic.** `@Autowired` field injection becomes constructor parameters, or `Wire` / `Fx` if you want a generated graph. See [Spring DI patterns and their Go equivalents](/docs/references/context). ## Estimated effort per service A medium Spring Boot service (50-100 endpoints, JPA entities, Kafka listeners) typically takes 1–2 engineering weeks for a Java team.
The biggest time sink is decomposing JPA-heavy data access patterns into explicit Go SQL. ## Recommended adoption 1. Pick a Spring Boot service that's stateless and well-tested. 2. Rebuild it in GoFr; reuse its database, message broker, and downstream contracts. 3. Compare resource consumption and latency on identical workloads. {% faq %} {% faq-item question="What about Spring's @Transactional?" %} Go has explicit transaction handling — `tx, _ := c.SQL.Begin(); defer tx.Rollback(); ...; tx.Commit()`. There is no annotation-driven transaction boundary; the boundary is wherever your code says it is. {% /faq-item %} {% faq-item question="Can I keep using Kafka with the same topics?" %} Yes. GoFr's Pub/Sub subscriber connects to your existing Kafka brokers and topics. {% /faq-item %} {% /faq %} --- ## https://gofr.dev/migrate # Migrate to GoFr {% answer %} You don't have to migrate everything at once. The recommended path is: pick one new microservice, build it in GoFr, get a feel for the framework, then migrate older services as you touch them. GoFr deploys alongside your existing Gin / Fiber / Echo / Express / Flask / Spring Boot services with no special infrastructure. {% /answer %} {% callout title="Migrating with an AI assistant?" %} Hand [https://gofr.dev/AGENTS.md](https://gofr.dev/AGENTS.md) to your coding assistant (Claude Code, Cursor, Codex, Aider). It contains the framework conventions, routing/binding/datasource patterns, and per-framework cheat-sheets so the assistant can translate handlers without you re-explaining GoFr. {% /callout %} ## Choose your starting point ### From Go frameworks - [Migrate from Gin →](/migrate/from-gin) — handler, middleware, binding, and group translations. - [Migrate from Fiber →](/migrate/from-fiber) — `net/http` semantics, datasource and observability differences. - [Migrate from Echo →](/migrate/from-echo) — Migration guide for Go developers moving from Echo to GoFr. 
Handler signature, middleware, route groups, binding, and gradual adoption with side-by-side examples. - [Migrate from chi →](/migrate/from-chi) — Migration guide for Go developers moving from chi router to GoFr framework. Handler signature, middleware, route groups, URL params, and the router-vs-framework trade-off. ### From Node.js / TypeScript - [Migrate from Express (Node.js) →](/migrate/from-express) — JavaScript-to-Go mental model, async/await analogues. - [Migrate from NestJS →](/migrate/from-nestjs) — Migration guide for TypeScript developers moving from NestJS to GoFr. Controllers and decorators to handlers, modules to constructors, microservices to Pub/Sub. ### From Python - [Migrate from Flask →](/migrate/from-flask) — Pythonic patterns and their Go equivalents. - [Migrate from FastAPI →](/migrate/from-fastapi) — Migration guide for Python developers moving from FastAPI to GoFr. Async/await to goroutines, Pydantic to Go structs, automatic OpenAPI to built-in Swagger UI. - [Migrate from Django REST →](/migrate/from-django-rest) — Migration guide for Python developers moving from Django REST Framework to GoFr. ViewSets to AddRESTHandlers, ORM to SQL drivers, permissions to RBAC, settings.py to .env. ### From Java / .NET - [Migrate from Spring Boot (Java) →](/migrate/from-spring-boot) — DI, controllers, configuration, and observability mappings. - [Migrate from ASP.NET Core →](/migrate/from-aspnet-core) — Migration guide for C# developers moving from ASP.NET Core to GoFr. Controllers to handlers, DI container to constructors, appsettings.json to .env, OTLP exporter. ### From PHP / Ruby - [Migrate from Laravel (PHP) →](/migrate/from-laravel) — Migration guide for PHP developers moving from Laravel to GoFr. Controllers to handlers, Eloquent to SQL drivers, Artisan to GoFr CLI, queues to Pub/Sub. - [Migrate from Rails (Ruby) →](/migrate/from-rails) — Migration guide for Ruby developers moving from Rails to GoFr. 
Controllers to handlers, ActiveRecord to SQL, Active Job to Pub/Sub, Action Cable to WebSocket. ## Recommended adoption strategy 1. **Run a spike.** Build a small new service or internal tool in GoFr to learn the framework patterns. 2. **Establish your baseline configuration.** Decide how your team handles `.env` files, your OpenTelemetry collector endpoint, your Prometheus scrape config, and your log format. 3. **Migrate by attrition.** When you next touch an existing service for a feature or refactor, port it to GoFr in the same change. 4. **Use the same datastores.** GoFr's MySQL / Postgres / Mongo / Redis / Kafka clients connect to the same backends you already use; no data migration is required. 5. **Validate observability.** Confirm that traces, metrics, and logs from the migrated service appear in your existing observability stack with the same names and labels you expect. ## What stays the same - Your databases, message brokers, and caches. GoFr connects to existing infrastructure. - Your deployment platform (Kubernetes, ECS, Cloud Run, bare VM — all supported). - Your CI/CD pipeline. GoFr is a normal Go module; build and ship it the same way. - Your team's Go skills. GoFr is idiomatic Go. ## What changes - The handler signature: `func(*gofr.Context) (any, error)` replaces framework-specific types. - Configuration moves to environment variables / `.env` (12-factor). - Observability becomes default — you remove your manual OpenTelemetry / Prometheus wiring code. - Datasource access goes through `c.SQL`, `c.Redis`, `c.Mongo`, etc., instead of injected clients you manage. {% faq %} {% faq-item question="Can I migrate one route at a time?" %} Within a service: not easily, since GoFr owns the HTTP server. Across services: yes — keep your existing services running and migrate them one at a time. {% /faq-item %} {% faq-item question="Does my existing OpenTelemetry collector / Prometheus / log aggregator still work?" %} Yes. 
GoFr exports OTLP traces and Prometheus metrics; structured logs go to stdout in JSON. {% /faq-item %} {% /faq %} --- # Learn ## https://gofr.dev/learn # Learn GoFr {% answer %} There's no single right way to learn a framework — the best path depends on where you're starting from. Three tracks below sequence the existing GoFr documentation by background and goal: coming from another language, experienced Go developer new to GoFr, or building for production. {% /answer %} ## Track A — Coming from another language **Estimated time: 1–2 hours (read) + 30 min (run hello-world).** You know how to build microservices in Node, Python, Java, or another ecosystem, but you're new to Go. Read in this order: 1. [Quick Start: Build your first GoFr REST API](/docs/quick-start/introduction) — get something running. 2. **Pick your migration guide:** - [From Express (Node.js)](/migrate/from-express) - [From Flask (Python)](/migrate/from-flask) - [From Spring Boot (Java)](/migrate/from-spring-boot) 3. [Configuration](/docs/quick-start/configuration) — environment-driven config. 4. [Connecting MySQL](/docs/quick-start/connecting-mysql) and [Connecting Redis](/docs/quick-start/connecting-redis). 5. [Observability](/docs/quick-start/observability) — see traces, metrics, and structured logs. 6. [GoFr Context Reference](/docs/references/context) — the one core abstraction. ## Track B — Go developer new to GoFr **Estimated time: 30 min (read) + 30 min (run hello-world).** You've written Go before — maybe with `net/http`, Gin, Fiber, Echo, or Chi. 1. [Why GoFr?](/why-gofr) — the philosophy and what's actually in the box. 2. [GoFr vs your current framework](/comparison) — head-to-head feature comparison. 3. [Quick Start: Build your first GoFr REST API](/docs/quick-start/introduction). 4. [Auto CRUD REST handlers](/docs/quick-start/add-rest-handlers). 5. [Custom middleware](/docs/advanced-guide/middlewares). 6. [Service-to-service HTTP](/docs/advanced-guide/http-communication). 7. 
[GoFr Context Reference](/docs/references/context) and [Configuration Reference](/docs/references/configs). ## Track C — Building for production **Estimated time: 2–3 hours, dipping in as you encounter each concern in real services.** 1. [Observability](/docs/quick-start/observability). 2. [Custom OpenTelemetry spans](/docs/advanced-guide/custom-spans-in-tracing). 3. [Custom Prometheus metrics](/docs/advanced-guide/publishing-custom-metrics). 4. [Service health monitoring](/docs/advanced-guide/monitoring-service-health). 5. [Authentication](/docs/advanced-guide/authentication) and [RBAC](/docs/advanced-guide/rbac). 6. [Circuit breaker support](/docs/advanced-guide/circuit-breaker) and [HTTP communication](/docs/advanced-guide/http-communication). 7. [Database migrations](/docs/advanced-guide/handling-data-migrations). 8. [Startup hooks](/docs/advanced-guide/startup-hooks). 9. [Remote log level change](/docs/advanced-guide/remote-log-level-change). 10. [Profiling (pprof)](/docs/advanced-guide/debugging). 11. [Testing](/docs/references/testing). ## Reference materials - [Full documentation index](/docs) - [Examples repository](https://github.com/gofr-dev/gofr/tree/main/examples) - [Showcase](/showcase) — companies and engineers running GoFr in production. - [Changelog](/changelog) — release notes and version history. --- # FAQ ## https://gofr.dev/faq # Frequently Asked Questions {% answer %} GoFr is a free, Apache 2.0–licensed, opinionated Go framework for production microservices. It includes built-in HTTP, gRPC, GraphQL, WebSockets, Pub/Sub, observability (OpenTelemetry traces, Prometheus metrics, structured logs), 15+ datasource clients, migrations, cron, RBAC, and a service-to-service HTTP client with circuit breakers. {% /answer %} ## Pricing & licensing {% faq %} {% faq-item question="Is GoFr free to use?" %} Yes. GoFr is open source and licensed under [Apache 2.0](https://github.com/gofr-dev/gofr/blob/main/LICENSE). 
There is no paid tier, no commercial license, and no usage limits. {% /faq-item %} {% faq-item question="Is GoFr open source?" %} Yes. The full source is at [github.com/gofr-dev/gofr](https://github.com/gofr-dev/gofr) under the Apache 2.0 license. {% /faq-item %} {% faq-item question="Who maintains GoFr?" %} GoFr is developed in the open by the GoFr team and a community of contributors. See the [team page](/team) for the current maintainers, and [github.com/gofr-dev/gofr](https://github.com/gofr-dev/gofr) for ways to get involved. {% /faq-item %} {% /faq %} ## Features and protocols {% faq %} {% faq-item question="Does GoFr support gRPC?" %} Yes. Built-in gRPC server with unary and stream interceptors, custom server options, panic recovery, and integrated observability. See [Writing gRPC Servers and Clients](/docs/advanced-guide/grpc). {% /faq-item %} {% faq-item question="Does GoFr support GraphQL?" %} Yes. GoFr supports schema-first GraphQL with queries, mutations, and an interactive playground. See [GraphQL in Go with GoFr](/docs/advanced-guide/graphql). {% /faq-item %} {% faq-item question="Does GoFr support WebSockets?" %} Yes — both server and client. Auto-reconnect, custom upgrader, and integrated observability. See [WebSockets in Go with GoFr](/docs/advanced-guide/websocket). {% /faq-item %} {% faq-item question="Does GoFr support Pub/Sub?" %} Yes. Built-in support for Apache Kafka, NATS JetStream, Google Pub/Sub, MQTT, AWS SQS, and Azure Event Hub through one unified `Subscribe` / `Publish` API. {% /faq-item %} {% faq-item question="Does GoFr include cron jobs?" %} Yes. Schedule recurring tasks with 5- or 6-part cron expressions. Each job execution gets an automatic OpenTelemetry span and metrics. {% /faq-item %} {% /faq %} ## Datasources {% faq %} {% faq-item question="Which databases does GoFr support?" %} SQL: MySQL, PostgreSQL, Oracle, SQLite, SQL Server. NoSQL: MongoDB, Redis, Cassandra, ScyllaDB, Couchbase, DGraph, SurrealDB, ArangoDB. 
Search/Analytics: Elasticsearch, Solr, ClickHouse, OpenTSDB, InfluxDB. All ship with built-in observability. {% /faq-item %} {% faq-item question="Does GoFr have an ORM?" %} GoFr's SQL client provides connection pooling, observability, and parameter binding, not an ORM. Many GoFr users pair it with [`sqlc`](https://sqlc.dev/) for type-safe queries; some use `gorm`. Both work fine inside GoFr handlers. {% /faq-item %} {% faq-item question="Can I plug in a custom database driver?" %} Yes. Implement the GoFr datasource interface to inject your own backend with full observability. {% /faq-item %} {% /faq %} ## Observability {% faq %} {% faq-item question="Does GoFr support OpenTelemetry?" %} Yes. OpenTelemetry tracing is built in with OTLP and Jaeger exporters, configurable sampling, trace-context propagation, and automatic span correlation across HTTP, gRPC, datasource calls, cron jobs, and Pub/Sub. {% /faq-item %} {% faq-item question="Does GoFr support Prometheus metrics?" %} Yes. Built-in Prometheus metrics for HTTP requests, gRPC, cron jobs, GraphQL operations, and your own custom counters/histograms/gauges. The `/metrics` endpoint is auto-exposed. {% /faq-item %} {% faq-item question="Can I change log levels in production without restart?" %} Yes. Point `REMOTE_LOG_URL` at an HTTP endpoint that returns the desired log level; GoFr's logger polls that URL and adjusts the in-process level on the fly (poll interval via `REMOTE_LOG_FETCH_INTERVAL`). The admin endpoint is one *you* operate — GoFr does not serve it on the service itself. See [Remote Log Level Change](/docs/advanced-guide/remote-log-level-change). {% /faq-item %} {% /faq %} ## Comparing to other frameworks {% faq %} {% faq-item question="How is GoFr different from net/http?" %} `net/http` is the standard library — it gives you HTTP and nothing else. 
GoFr is built on `net/http` and adds opinionated production layers: routing helpers, observability, datasource clients, gRPC, GraphQL, WebSockets, Pub/Sub, migrations, RBAC, and a resilient service-to-service HTTP client. {% /faq-item %} {% faq-item question="How does GoFr compare to Gin, Fiber, Echo, and Chi?" %} Gin, Fiber, Echo, and Chi are excellent minimal HTTP routers. GoFr has a wider scope — alongside HTTP it also includes observability, datasources, gRPC, GraphQL, Pub/Sub, migrations, and resilience patterns. See [GoFr vs Gin / Fiber / Echo / Chi](/comparison) for a side-by-side; both approaches have their place. {% /faq-item %} {% faq-item question="Can I migrate from Gin / Fiber / Express / Flask / Spring Boot to GoFr?" %} Yes. Migration guides with concrete code translations are at [/migrate](/migrate). {% /faq-item %} {% /faq %} ## Deployment and operations {% faq %} {% faq-item question="Does GoFr work on Kubernetes?" %} Yes. GoFr is designed for Kubernetes deployment. Health endpoints (`/.well-known/health`, `/.well-known/alive`) are auto-exposed for liveness and readiness probes. Logs go to stdout in JSON; metrics are exposed at `/metrics` for Prometheus scraping. {% /faq-item %} {% faq-item question="Does GoFr support graceful shutdown?" %} Yes. `app.Shutdown(ctx)` closes the HTTP server, gRPC server, datasource connections, and loggers cleanly when the process receives a termination signal. {% /faq-item %} {% faq-item question="Can I run setup logic before the server starts?" %} Yes. Register a function with `app.OnStart` to seed databases, warm caches, or perform initialization synchronously before traffic begins. {% /faq-item %} {% faq-item question="How do I test GoFr applications?" %} GoFr provides built-in mocks for handlers, datasources (SQL, Redis, Mongo), HTTP services, and Pub/Sub. See [Testing GoFr Applications in Go](/docs/references/testing). {% /faq-item %} {% /faq %} ---
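A complementary note on the testing question above: GoFr's built-in mocks cover framework-owned dependencies (datasources, HTTP services, Pub/Sub), but business logic that does not touch the container can be tested as plain Go with ordinary table-driven tests. A minimal sketch — `validateAge` is a hypothetical helper for illustration, not part of GoFr:

```go
package main

import "fmt"

// validateAge is a hypothetical pure helper of the kind a GoFr handler
// might delegate to. Pure functions like this need no framework mocks:
// they can be covered with ordinary table-driven Go tests.
func validateAge(age int) error {
	if age < 0 || age > 150 {
		return fmt.Errorf("age %d out of range", age)
	}
	return nil
}

func main() {
	// Table-driven check — the idiomatic shape of a Go unit test body.
	cases := []struct {
		age    int
		wantOK bool
	}{
		{30, true},
		{-1, false},
		{200, false},
	}
	for _, tc := range cases {
		gotOK := validateAge(tc.age) == nil
		fmt.Printf("age=%d ok=%v (want %v)\n", tc.age, gotOK, tc.wantOK)
	}
}
```

Keeping handlers thin and pushing rules into helpers like this means the GoFr mocks are only needed at the boundary where the handler reads from `c.SQL`, `c.Redis`, or downstream services.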