Developer guide¶
Receptor is an open source project that lives at https://github.com/ansible/receptor.
Testing¶
Pull requests must pass a suite of integration tests before being merged into devel.

Running make test will run the full test suite locally. Some of the tests require access to a Kubernetes cluster; these tests load the kubeconfig file located at $HOME/.kube/config. One simple way to make these tests work is to start minikube locally before running make test. See https://minikube.sigs.k8s.io/docs/start/ for more information about minikube.

To skip the tests that depend on Kubernetes, set the environment variable SKIP_KUBE=1 (for example, export SKIP_KUBE=1).
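For example, assuming minikube is installed, either of the following should work:

# Run the full suite, including the Kubernetes-dependent tests
minikube start
make test

# Or skip the Kubernetes-dependent tests
SKIP_KUBE=1 make test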
Additionally, all code must pass a suite of Go linters. The receptor repository contains a pre-commit configuration file (.pre-commit-config.yaml) that points to the linter suite. It is best practice to install the pre-commit hooks so that the linters run locally on each commit.
cd $HOME
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
pip install pre-commit
cd receptor
pre-commit install
See https://pre-commit.com/ and https://golangci-lint.run/ for more details on installing and using these tools.
We use gomock to generate mocks for our unit tests. The mocks live in a package underneath the real implementation, prefixed with mock_. An example is the mock_workceptor package under pkg/workceptor.
To generate a mock for a particular file, run mockgen against the corresponding source file.
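For example, to create or update the mocks for Workceptor, an invocation along these lines should work (the exact source and destination paths are illustrative):

mockgen -source=pkg/workceptor/workceptor.go \
    -destination=pkg/workceptor/mock_workceptor/workceptor.go \
    -package=mock_workceptor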
Source code¶
The next couple of sections aim to orient developers to the receptor codebase and provide a starting point for understanding how receptor works.
Parsing receptor.conf¶
Let’s see how items in the config file are mapped to Golang internals.
As an example, in tcp.go:

cmdline.RegisterConfigTypeForApp("receptor-backends",
    "tcp-peer", "Make an outbound backend connection to a TCP peer", TCPDialerCfg{}, cmdline.Section(backendSection))
“tcp-peer” is a top-level key (action item) in receptor.conf:

- tcp-peer:
    address: localhost:2222
RegisterConfigTypeForApp tells the cmdline parser that “tcp-peer” is mapped to the TCPDialerCfg{} structure.
main() in receptor.go is the entry point for a running receptor process.
In receptor.go (modified for clarity):
cl.ParseAndRun("receptor.conf", []string{"Init", "Prepare", "Run"})
A receptor config file has many action items, such as “node”, “work-command”, and “tcp-peer”. ParseAndRun is how each of these items is instantiated when receptor starts. Specifically, ParseAndRun will run the Init, Prepare, and Run methods associated with each action item.
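As a sketch of this pattern, a hypothetical new action item might look roughly like the following (ExampleCfg and "example-item" are illustrative names, and the import path assumes the cmdline package receptor already uses):

package backends

import "github.com/ghjm/cmdline"

// ExampleCfg is a hypothetical action item illustrating the pattern.
type ExampleCfg struct {
    Address string
}

// ParseAndRun invokes these methods, in order, for each action item.
func (cfg ExampleCfg) Init() error    { return nil } // early setup
func (cfg ExampleCfg) Prepare() error { return nil } // validate parameters
func (cfg ExampleCfg) Run() error     { return nil } // start the component

func init() {
    // Map the "example-item" key in receptor.conf to ExampleCfg.
    cmdline.RegisterConfigTypeForApp("receptor-backends",
        "example-item", "An illustrative action item", ExampleCfg{})
}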
Here is the Prepare method for TCPDialerCfg. By the time this code executes, the cfg structure has already been populated with the data provided in the config file.
// Prepare verifies the parameters are correct.
func (cfg TCPDialerCfg) Prepare() error {
    if cfg.Cost <= 0.0 {
        return fmt.Errorf("connection cost must be positive")
    }
    return nil
}
This simply checks that the provided Cost is valid.
The Run method for the TCPDialerCfg object:
// Run runs the action.
func (cfg TCPDialerCfg) Run() error {
    logger.Debug("Running TCP peer connection %s\n", cfg.Address)
    host, _, err := net.SplitHostPort(cfg.Address)
    if err != nil {
        return err
    }
    tlscfg, err := netceptor.MainInstance.GetClientTLSConfig(cfg.TLS, host, "dns")
    if err != nil {
        return err
    }
    b, err := NewTCPDialer(cfg.Address, cfg.Redial, tlscfg)
    if err != nil {
        logger.Error("Error creating peer %s: %s\n", cfg.Address, err)
        return err
    }
    err = netceptor.MainInstance.AddBackend(b, cfg.Cost, nil)
    if err != nil {
        return err
    }
    return nil
}
This creates a new TCP dialer object and passes it to the netceptor AddBackend method, so that it can be processed further. AddBackend starts goroutines that periodically dial the address defined in the TCP dialer structure, eventually establishing a TCP connection to another receptor node.
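Conceptually, the dialer behaves like this toy redial loop (a sketch only; the real implementation lives in the backends package and also handles backoff, TLS, and shutdown):

package main

import (
    "net"
    "time"
)

// dialLoop keeps trying to establish an outbound connection and hands each
// successful connection to the protocol layer; when a session ends, it redials.
func dialLoop(address string, runSession func(net.Conn)) {
    for {
        conn, err := net.Dial("tcp", address)
        if err != nil {
            time.Sleep(5 * time.Second) // back off before redialing
            continue
        }
        runSession(conn) // returns when the session ends
    }
}

func main() {
    dialLoop("localhost:2222", func(c net.Conn) { c.Close() })
}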
In general, when studying how the startup process works in receptor, look at the Init, Prepare, and Run methods throughout the code, as these are the entry points for running those specific components of receptor.
Ping¶
Studying how pings work in receptor will provide a useful glimpse into the internal workings of netceptor – the main component of receptor that handles connections and data traffic over the mesh.
For example, to ping node bar from node foo:

receptorctl --socket /tmp/foo.sock ping bar
The control-service on foo will receive this command and subsequently call the following:
ping.go::ping
func ping(nc *netceptor.Netceptor, target string, hopsToLive byte) (time.Duration, string, error) {
    pc, err := nc.ListenPacket("")
target is the target node, “bar” in this case. nc.ListenPacket("") starts a new ephemeral service and returns a PacketConn object. This is a datagram connection with WriteTo() and ReadFrom() methods for sending and receiving data to and from other nodes on the mesh.
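From the caller's side, a minimal sketch of this API (assuming nc is a running *netceptor.Netceptor and a node named bar exists on the mesh):

// pingOnce sends one empty datagram to bar:ping and waits for the reply.
func pingOnce(nc *netceptor.Netceptor) (string, error) {
    pc, err := nc.ListenPacket("") // new ephemeral service on this node
    if err != nil {
        return "", err
    }
    defer pc.Close()
    if _, err := pc.WriteTo([]byte{}, nc.NewAddr("bar", "ping")); err != nil {
        return "", err
    }
    buf := make([]byte, 4096)
    _, addr, err := pc.ReadFrom(buf) // blocks until a reply arrives
    if err != nil {
        return "", err
    }
    return addr.String(), nil // the replying address, e.g. "bar:ping"
}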
packetconn.go::ListenPacket
pc := &PacketConn{
    s:            s,
    localService: service,
    recvChan:     make(chan *messageData),
    advertise:    false,
    adTags:       nil,
    connType:     ConnTypeDatagram,
    hopsToLive:   s.maxForwardingHops,
}
s.listenerRegistry[service] = pc
return pc, nil
s is the main netceptor object, and a reference to the PacketConn object is stored in netceptor’s listenerRegistry map.
ping.go::ping
_, err = pc.WriteTo([]byte{}, nc.NewAddr(target, "ping"))
This sends an empty message to the address “bar:ping” on the mesh. Recall that nodes are analogous to DNS names, and services are like port numbers.
WriteTo calls sendMessageWithHopsToLive.
netceptor.go::sendMessageWithHopsToLive
md := &messageData{
    FromNode:    s.nodeID,
    FromService: fromService,
    ToNode:      toNode,
    ToService:   toService,
    HopsToLive:  hopsToLive,
    Data:        data,
}
return s.handleMessageData(md)
Here the message is constructed with essential information such as the source node and service, and the destination node and service. The Data field contains the actual message, which is empty in this case.
handleMessageData calls forwardMessage with the md object.
netceptor.go::forwardMessage
nextHop, ok := s.routingTable[md.ToNode]
The current node might not be directly connected to the target node, so netceptor needs to determine the next hop to pass the data to. s.routingTable is a map where the key is a destination (“bar”) and the value is the next hop along the path to that node. In a simple two-node setup with foo and bar, s.routingTable["bar"] == "bar".
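For a hypothetical three-node chain foo <-> bar <-> baz, foo's routing table would conceptually contain:

// On foo: bar is a direct peer; baz is reached through bar.
routingTable := map[string]string{
    "bar": "bar", // directly connected
    "baz": "bar", // next hop on the path to baz
}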
netceptor.go::forwardMessage
c, ok := s.connections[nextHop]
c.WriteChan <- message
c here is a ConnInfo object, which interacts with the various backend connections (UDP, TCP, websockets).
WriteChan is a Go channel. Channels allow communication between separate goroutines running in the application. When foo and bar first started, they established a backend connection. Each node runs the netceptor runProtocol goroutine, which in turn starts a protoWriter goroutine.
netceptor.go::protoWriter
case message, more := <-ci.WriteChan:
    err := sess.Send(message)
So before the “ping” command was issued, this protoWriter goroutine was already running and waiting to read messages from WriteChan.
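The underlying mechanism is a standard Go producer/consumer pair over a channel. A self-contained toy version (the names are illustrative, not receptor's):

package main

import "fmt"

func main() {
    writeChan := make(chan []byte)
    done := make(chan struct{})

    // Consumer, playing the role of protoWriter: drain the channel and send.
    go func() {
        for message := range writeChan {
            fmt.Printf("sending %d bytes\n", len(message)) // stand-in for sess.Send(message)
        }
        close(done)
    }()

    // Producer, playing the role of forwardMessage: queue a message for delivery.
    writeChan <- []byte("ping")
    close(writeChan)
    <-done
}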
sess is a BackendSession object. BackendSession is an abstraction over the various available backends. If foo and bar are connected via TCP, then sess.Send(message) will pass the data along to the already established TCP session.
tcp.go::Send
func (ns *TCPSession) Send(data []byte) error {
    buf := ns.framer.SendData(data)
    n, err := ns.conn.Write(buf)
ns.conn is a net.Conn object, which is part of the Go standard library.
At this point the message has left the node via a backend connection, where it will be received by bar.
Let’s review the code from bar’s perspective to see how it handles the incoming message targeting its “ping” service.
On the receiving side, the data is first read here:
tcp.go::Recv
n, err := ns.conn.Read(buf)
ns.framer.RecvData(buf[:n])
Recv is called in the protoReader goroutine, analogous to the protoWriter goroutine used when the message was sent from foo.
Note that a single ns.conn.Read(buf) call might not return the full message, so the data is buffered until messageReady() returns true. The message is tagged with its own size, so once Recv has received N bytes of an N-byte message, it returns.
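A self-contained sketch of length-prefixed framing in this spirit (the real framer's wire format may differ):

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

// sendData prefixes the payload with its length so the receiver knows when a
// complete message has arrived.
func sendData(data []byte) []byte {
    buf := make([]byte, 4+len(data))
    binary.BigEndian.PutUint32(buf, uint32(len(data)))
    copy(buf[4:], data)
    return buf
}

// messageReady reports whether buf holds at least one complete message.
func messageReady(buf *bytes.Buffer) bool {
    if buf.Len() < 4 {
        return false
    }
    size := binary.BigEndian.Uint32(buf.Bytes())
    return buf.Len() >= 4+int(size)
}

func main() {
    var recvBuf bytes.Buffer
    frame := sendData([]byte("ping"))
    recvBuf.Write(frame[:3])            // partial read: not ready yet
    fmt.Println(messageReady(&recvBuf)) // false
    recvBuf.Write(frame[3:])            // the rest arrives
    fmt.Println(messageReady(&recvBuf)) // true
}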
netceptor.go::protoReader
buf, err := sess.Recv(1 * time.Second)
ci.ReadChan <- buf
The data is passed to a ReadChan channel.
netceptor.go::runProtocol
case data := <-ci.ReadChan:
    message, err := s.translateDataToMessage(data)
    err = s.handleMessageData(message)
The data is read from the channel and deserialized into an actual message format in translateDataToMessage.
netceptor.go::handleMessageData
if md.ToNode == s.nodeID {
    handled, err := s.dispatchReservedService(md)
This checks whether the destination node indicated in the message is the current node. If so, the message can be dispatched to the service.
“ping” is a reserved service in the netceptor instance.
s.reservedServices = map[string]func(*messageData) error{
    "ping": s.handlePing,
}
netceptor.go::handlePing
func (s *Netceptor) handlePing(md *messageData) error {
    return s.sendMessage("ping", md.FromNode, md.FromService, []byte{})
}
This is the ping reply handler. It sends an empty message to the FromNode (foo).
The FromService here is not “ping”, but rather the ephemeral service that was created by ListenPacket("") in ping.go on foo.
With trace enabled in the receptor configuration, the following log statements show the reply from bar:
TRACE --- Received data length 0 from foo:h73opPEh to bar:ping via foo
TRACE --- Sending data length 0 from bar:ping to foo:h73opPEh
So the ephemeral service on foo is called h73opPEh (a randomly generated string).
From here, the message from bar will be passed along in much the same fashion as the original ping message sent from foo.
Back on node foo, the message is received and finally handled in ping.go.
ping.go::ping
_, addr, err := pc.ReadFrom(buf)
...
case replyChan <- fromNode:
...
case remote := <-replyChan:
    return time.Since(startTime), remote, nil
The data is read from the PacketConn object and written to a channel, where it is later read by the ping() function; ping() then returns the round-trip delay, time.Since(startTime).
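The same channel pattern makes it easy to bound the wait for a reply; a sketch with a timeout (the duration is illustrative, not necessarily what receptor uses):

select {
case remote := <-replyChan:
    return time.Since(startTime), remote, nil
case <-time.After(10 * time.Second):
    return time.Since(startTime), "", fmt.Errorf("ping timed out")
}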