Synchronous Communication in GoLang

Michał Groele - Senior Golang Developer

Our previous article discussed asynchronous communication using queues. The ideal microservice architecture would be based solely on queues. Still, different situations require different solutions; in real projects we often use synchronous communication, either as a client or as a server exposing data to a service over which we do not have complete control.

Another case would be data synchronization or data validation. If we do not fully trust our queue handling, we can verify (for instance, once a day) whether the amount of data we have received matches the amount of data in the source service. However, the most common reason for using synchronous communication is the front-end layer, for which it is simply easier to implement.

Gin

Gin is a web framework which allows us to easily implement an API in our Go project. It also supports endpoints that return HTML views, but that is beyond the scope of this article, so we will not discuss it further.

Go ships with the built-in net/http package, which is quite convenient and easy to use. Still, since we do not need to reinvent the wheel, it is worth taking advantage of a ready-made package that already contains the functions you will most likely need. One such package is Gin, which features extensive routing, rendering, and middleware support.

Response formats

A relatively common format returned by APIs is JSON, which Gin also supports. It can automatically build a JSON response based on a struct. Other supported formats that can also be used in an API are XML and YAML.

The process of creating a response supporting all three formats may look like the example below.


package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

type User struct {
	ID           int    `json:"id" xml:"Id" yaml:"id"`
	FirstName    string `json:"first_name" xml:"FirstName" yaml:"first_name"`
	EmailAddress string `json:"email_address" xml:"EmailAddress" yaml:"email_address"`
}

func main() {
	user := User{
		ID:           1,
		FirstName:    "John",
		EmailAddress: "john.doe@polcode.net",
	}

	router := gin.Default()
	router.GET("/user", func(c *gin.Context) {
		// Pick the response format based on the request's Content-Type header.
		switch c.ContentType() {
		case "application/xml":
			c.XML(http.StatusOK, user)
		case "application/yaml":
			c.YAML(http.StatusOK, user)
		default:
			c.JSON(http.StatusOK, user)
		}
	})

	router.Run(":8080")
}


Adding HTTP API endpoints to our code should not be too much of a problem.

Middleware

Gin also gives us a well-designed way to extend our server with additional functions, such as authorization, logging, error handling, etc. Some of these functions are already built in as middleware, such as gin.Recovery(), which recovers from panics and returns a 500 Internal Server Error. We can add our own middleware very simply – we just need to keep the function signature compatible. Middleware can be applied not only to the entire server, but also to individual endpoints or endpoint groups.

Example of logger implementation:

func main() {
	// Creates a new Gin engine without any default middleware
	r := gin.New()

	// Applies gin.Logger as middleware for the whole server
	r.Use(gin.Logger())

	// Applies our custom middleware to a specific endpoint only
	r.GET("/user", MyCustomLogger(), userEndpoint)

	r.Run(":8080")
}


As you can see, the middleware technique can be very beneficial in this case.
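
The custom middleware itself only needs to return a gin.HandlerFunc. A minimal sketch of what the MyCustomLogger used above could look like – assuming the logrus logger used later in this article; the logged fields and message are our own choices:

import (
	"time"

	"github.com/gin-gonic/gin"
	log "github.com/sirupsen/logrus"
)

// MyCustomLogger returns a Gin-compatible middleware which measures and logs
// every request handled by the endpoint it is attached to.
func MyCustomLogger() gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()

		// Pass control to the next middleware or the final handler.
		c.Next()

		// Log basic request data once the handler has finished.
		log.WithFields(log.Fields{
			"method":   c.Request.Method,
			"path":     c.Request.URL.Path,
			"status":   c.Writer.Status(),
			"duration": time.Since(start),
		}).Info("request handled")
	}
}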

Tools

In the case of an API written in Gin, the best tool for testing the code (apart from integration tests using a Go client) is the HTTP client built into GoLand for sending requests to the API. Thanks to this, we can keep a collection of sample API requests directly in the project files in the repository. Another option would be Postman or any other tool which allows you to send requests to the desired address. Postman lets us test endpoints in both JSON and XML.
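
As for the integration tests themselves, a gin.Engine satisfies http.Handler, so endpoints can be exercised with the standard httptest package. Below is a minimal sketch, assuming the router construction from the earlier example is extracted into a hypothetical setupRouter() helper instead of being built inside main():

package main

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// TestUserEndpoint drives the /user endpoint directly through the router.
// setupRouter is a hypothetical helper returning the gin.Engine from the
// earlier example.
func TestUserEndpoint(t *testing.T) {
	router := setupRouter()

	request := httptest.NewRequest(http.MethodGet, "/user", nil)
	recorder := httptest.NewRecorder()

	// gin.Engine implements http.Handler, so it can serve the test request directly.
	router.ServeHTTP(recorder, request)

	if recorder.Code != http.StatusOK {
		t.Fatalf("expected status 200, got %d", recorder.Code)
	}
}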

gRPC API

A gRPC API in Go can be implemented with Protocol Buffers, which were already mentioned in the previous article. The implementation presented there showed the definition of messages; here we also need a service definition, an example of which can be seen below.

message Account {
    string id = 1;
    string name = 2;
    string email = 3;
    string phone_number = 4;
    Type type = 10;

    enum Type {
        TYPE_UNSPECIFIED = 0;
        TYPE_WEB_ACCOUNT = 1;
        TYPE_MOBILE_ACCOUNT = 2;
    }
}

service AccountServiceSearch {
    rpc FindBy(AccountSearchRequest) returns (Account);
}

message AccountSearchRequest {
    string id = 1;
    string name = 2;
}

The above code defines a service for searching accounts in our API by ID or name. Generating the ready-made code, together with an interface to implement in Go, is possible using the same command we used in the article on asynchronous communication.

"protoc -I /usr/include/google/protobuf -I . –go_out=plugins=grpc:output ./our_service.proto"

Ways of communication

gRPC, just like the regular HTTP API, has two modes for sending and receiving messages – Unary and Stream. Unary is a single complete request which expects a single complete response. Stream consists of sending the data in several parts. It can be used, for example, to send or download a file.

Both types can be mixed: the client can send a Unary request and expect a Stream in return, or send a Stream and expect a Unary response.

Unary is the default communication mode and does not need any additional information in the Protobuf definition, while Stream requires adding the stream keyword before the message which will be passed this way.

An example of implementing a data streaming server:

rpc Download(DownloadRequest) returns (stream DownloadResponse);

Implementation example for a data streaming client:

rpc Upload(stream UploadRequest) returns (UploadResponse);

The most common example of Stream communication is downloading and sending files. Implementing such a server and client in Go is also very simple; everything can be built with loops.

Below you can see an excerpt from the server-side implementation of receiving a file sent as a stream.

// Upload receives a file sent by the client as a stream of chunks.
// The generated stream type name (proto.FileService_UploadServer) is an
// assumption – it depends on how the service is named in the Protobuf file.
func (s *server) Upload(stream proto.FileService_UploadServer) error {
	fileData := bytes.Buffer{}

	for {
		log.Info("upload: waiting to receive more data")

		request, streamErr := stream.Recv()
		if errors.Is(streamErr, io.EOF) {
			log.WithFields(log.Fields{"error": streamErr}).Info("upload: all data received")

			break
		}

		if streamErr != nil {
			return errors.Wrap(streamErr, "upload: cannot receive chunk data")
		}

		chunk := request.GetFile().GetContent()

		if _, writeErr := fileData.Write(chunk); writeErr != nil {
			return errors.Wrap(writeErr, "cannot write chunk data")
		}
	}

	// Confirm to the client that the whole file has been received.
	return stream.SendAndClose(&proto.UploadResponse{})
}


The split([]byte, int) function in our case is a simple helper which divides the data we plan to send into smaller parts; we then loop over those parts and send each one separately as part of the Stream from the server to the client.
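
A minimal sketch of that sending side is shown below. It relies on the same assumed proto package and pkg/errors as the Upload excerpt above; the split helper, the chunk size, and the DownloadRequest/DownloadResponse field names are our own assumptions:

// Download streams the requested file back to the client in chunks.
func (s *server) Download(request *proto.DownloadRequest, stream proto.FileService_DownloadServer) error {
	fileData, err := os.ReadFile(request.GetName()) // hypothetical request field
	if err != nil {
		return errors.Wrap(err, "download: cannot read file")
	}

	// Divide the payload into chunks and send each one as a separate stream message.
	for _, chunk := range split(fileData, 64*1024) {
		if sendErr := stream.Send(&proto.DownloadResponse{Content: chunk}); sendErr != nil {
			return errors.Wrap(sendErr, "download: cannot send chunk data")
		}
	}

	return nil
}

// split divides data into parts of at most chunkSize bytes.
func split(data []byte, chunkSize int) [][]byte {
	var chunks [][]byte
	for len(data) > 0 {
		if len(data) < chunkSize {
			chunkSize = len(data)
		}
		chunks = append(chunks, data[:chunkSize])
		data = data[chunkSize:]
	}
	return chunks
}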


Receiving a Stream as a server comes down to listening for incoming data. It can be done, for example, with a for loop without any conditions, as in the excerpt above. The loop is closed when we receive confirmation (io.EOF) that all of the data sent to the server has arrived.

Interceptors

In gRPC, we can also extend our server with additional layers, for instance for authorization and logging. Similar to the previously described Gin middleware, this is possible with the help of Interceptors.

In the case of gRPC, we must remember that there are separate interceptors for Unary and Stream requests. We can find them in the grpc package, under the names UnaryServerInterceptor and StreamServerInterceptor.

An example of the implementation of Interceptors may look like the one presented below.

import (
	grpcmiddleware "github.com/grpc-ecosystem/go-grpc-middleware"
	log "github.com/sirupsen/logrus"
	"google.golang.org/grpc"
)

// ChainUnaryWithLogging prepends our request-logging interceptor to any
// interceptors passed in and chains them into a single one.
func ChainUnaryWithLogging(interceptors ...grpc.UnaryServerInterceptor) grpc.UnaryServerInterceptor {
	i := make([]grpc.UnaryServerInterceptor, 0, len(interceptors)+1)
	i = append(i, WithRequestLogging(func(c *RequestLoggingConfig) {
		c.LogRequest = log.IsLevelEnabled(log.DebugLevel)
	}))
	i = append(i, interceptors...)

	return grpcmiddleware.ChainUnaryServer(i...)
}

func NewServer() (*grpc.Server, error) {
	var serverOptions []grpc.ServerOption
	serverOptions = append(serverOptions, grpc.UnaryInterceptor(ChainUnaryWithLogging()))

	return grpc.NewServer(serverOptions...), nil
}
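
The WithRequestLogging interceptor used above is our own code and is not part of any library. A rough sketch of what such a unary logging interceptor could look like is shown below; it uses the same imports as the example above plus context, and the config type and logged fields are assumptions:

type RequestLoggingConfig struct {
	LogRequest bool
}

// WithRequestLogging returns a unary interceptor which logs every call and,
// optionally, the full request payload.
func WithRequestLogging(options ...func(*RequestLoggingConfig)) grpc.UnaryServerInterceptor {
	config := &RequestLoggingConfig{}
	for _, option := range options {
		option(config)
	}

	return func(
		ctx context.Context,
		req interface{},
		info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler,
	) (interface{}, error) {
		fields := log.Fields{"method": info.FullMethod}
		if config.LogRequest {
			fields["request"] = req
		}
		log.WithFields(fields).Info("handling gRPC request")

		// Call the actual handler (or the next interceptor in the chain).
		return handler(ctx, req)
	}
}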


Tools

Tools that allow for easy testing of requests to our gRPC API are already built into the GoLand IDE. If it is a tool we work with on a daily basis, we can easily build a collection of sample requests to our API and keep it in the project repository. Another interesting tool is BloomRPC, which allows us to import Protobuf files and send requests to our API. Thanks to the built-in header support, we can also easily test APIs protected by, for example, JWT or SSL keys. Another possible tool is the aforementioned Postman, but its gRPC support is in the beta phase at the time of writing this article, so not all of its functions are available for gRPC APIs.

Serverless

Recently, building services in a serverless architecture has been gaining popularity. It is based on developing functions, not entire applications. They do not rely on a constantly running server – they activate only upon receiving a request, which allows us to save some of the resources we use. However, it is always worth doing some calculations on the number of requests we want to process, as a scalable server with a constantly running application may turn out to be more profitable than a serverless solution. If you have a common function shared across a few services without many dependencies, the data processed within it is not excessive, and communication with it does not require a vast number of requests every single minute, you can confidently consider a serverless implementation.

Some of the popular ways of implementing a serverless solution are combining AWS Lambda with AWS API Gateway, as well as activating our function when a message arrives on an SQS queue. The various combinations of Go, Lambda and other AWS services themselves are out of the scope of this article.

Implementing a function activated by Lambda in Go is rather straightforward. We need to import the AWS Lambda Go SDK and create a main() function in which we register a handler for incoming requests; in our case, the handler receives an events.APIGatewayProxyRequest and returns an events.APIGatewayProxyResponse and an error. That is all; the rest depends on what we want our function to do. In addition, we need to configure our AWS user profile, create the Lambda function and the API Gateway, and connect them. Precise instructions can be found in the AWS documentation. Of course, the configuration itself is also possible in Terraform or through the official AWS tools, such as the AWS CLI and AWS SAM; this, however, requires us to compress our application and upload it to Lambda.

An example of a function in Go prepared for Lambda and API Gateway may look like the one presented below.

package main

import (
	"encoding/json"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	log "github.com/sirupsen/logrus"
)

type company struct {
	ID      string
	Name    string
	Website string
}

func showCompany(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	polcode := company{
		ID:      "1",
		Name:    "Polcode Sp. z o.o.",
		Website: "https://polcode.com",
	}

	body, err := json.Marshal(polcode)
	if err != nil {
		log.WithFields(log.Fields{
			"err": err,
		}).Error("could not marshal company to JSON")

		return events.APIGatewayProxyResponse{
			StatusCode: http.StatusInternalServerError,
			Body:       http.StatusText(http.StatusInternalServerError),
		}, nil
	}

	return events.APIGatewayProxyResponse{
		StatusCode: http.StatusOK,
		Body:       string(body),
	}, nil
}

func main() {
	lambda.Start(showCompany)
}


Summary

As you can see, Go offers many possibilities when it comes to implementing synchronous communication. There is the most popular option, a JSON API, as well as a gRPC service using Protobuf, which gives us standard verification of the request format and makes versioning easy to implement. There are also more modern serverless solutions, which can be carried out in the two previously mentioned ways.

However, we must remember to consider whether synchronous communication is actually the better solution. Its main disadvantage is that the service may become unavailable during a malfunction, which can cascade into failures in the services that call it as clients. With asynchronous communication, the answer does not necessarily arrive a millisecond after the query, but failures of individual services should not make the entire group unavailable.
