Building A Game Application Server (part 3)

2011-06-06 by Cheetah

So now we’ve done a basic design overview, and we’ve gone over our interthread communication. It’s time to dig into some net code. As a bit of forewarning, this net code only works on UNIX variants; it should compile cleanly on Linux or Solaris. Windows net code is actually quite similar; replace most of the #include’s with “#include <winsock.h>” and then add the Windows-specific bootstrapping.

If there is a demand for it, I’ll do the port myself at some point. The Windows threads are a different matter; my understanding is that those operate somewhat differently from POSIX threads, but it is no doubt fairly easy to do. If you are unfamiliar with network programming, I highly recommend Beej’s Guide. One of the few programming books I’ve ever bought is his; you should buy his book too! This guide assumes you know what a socket is, and can handle words like “descriptor” and “port”.

Anyway – the overall idea is that we will create a “server”, which is specifically a socket bound to a port on the machine, with two queues: one input, one output. A single thread will service each “server”, passing data from the socket to our game and then passing data back out again. Let’s define our core structures in a header file, such as:

typedef struct __connection__ {
	int	descr;				/* The connection descriptor (socket)     */
	char	connected;		/* Bool; true if still connected.         */
	time_t	lastinput;		/* Last time we received data.            */
	time_t	lastoutput;		/* Last time we sent data.                */
	char	input[CON_BUF];	/* The input buffer for connection.       */
	int	input_i;			/* The iterator for input 'i'.            */
	char*	output;			/* The output buffer, dynamic.            */
	int	output_size;		/* How big the output buffer is.          */
	int	output_i;			/* How far along we've gone in the buffer */
	struct	sockaddr_in	addy;		/* The con address.      */
	struct	__connection__* next;	/* Linked list variable  */
} CON;

typedef struct __server__ {
	int		sock;			/* The server's main socket    */
	int		numconnected;	/* Number of people connected  */
	unsigned short	port;	/* Port number                 */
	CON*		head;		/* Head of the connection list */
	int		status;			/* Status of the server.       */
	pthread_t	conthread;	/* Connection thread data.     */
	QUEUE*		input;		/* Input TO thread queue       */
	QUEUE*		output;		/* Output FROM thread queue    */
} SERVER;

typedef struct __sendmulti__{
	int	numdescr;			/* Number of descriptors to send to   */
	int*	descr;			/* The list of descriptors to send    */
	char*	message;		/* Message to send                    */
	int	message_size;		/* The size of the message being sent */
} SENDMULTI;

A “CON” is a single connection. We will keep them in a linked list, as we will be cycling over them. This, actually, is not the most efficient data structure for the task… but it’s the easiest! We can always improve it later. The CON is mostly internal in use.

A “SERVER” is the representation of a socket and its queues. It is the structure the rest of the program uses to receive or transmit over the socket.

And finally, the “SENDMULTI” structure is a way to, essentially, multicast: send a single message to a set of descriptors.
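As a sketch of how a caller might package one of these requests (the helper name `sendmulti_build` is my own, not part of the article’s API), here is the pattern of copying the descriptor list and message so the net thread can free them independently:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Mirror of the SENDMULTI structure from the header above. */
typedef struct __sendmulti__ {
	int	numdescr;	/* Number of descriptors to send to   */
	int*	descr;		/* The list of descriptors to send    */
	char*	message;	/* Message to send                    */
	int	message_size;	/* The size of the message being sent */
} SENDMULTI;

/* Build a SENDMULTI request for a set of descriptors.  The caller would
 * (hypothetically) push this onto the server's input queue under a
 * NETCMD_SENDMULTI command; everything is copied, so the net thread can
 * free the request whenever it is done sending. */
SENDMULTI* sendmulti_build(const int* descrs, int numdescr,
                           const char* msg, int msg_size){
	SENDMULTI* sm = (SENDMULTI*) malloc(sizeof(SENDMULTI));
	if(!sm) return NULL;
	sm->descr = (int*) malloc(sizeof(int) * numdescr);
	sm->message = (char*) malloc(msg_size);
	if(!sm->descr || !sm->message){
		free(sm->descr); free(sm->message); free(sm);
		return NULL;
	}
	memcpy(sm->descr, descrs, sizeof(int) * numdescr);
	memcpy(sm->message, msg, msg_size);
	sm->numdescr = numdescr;
	sm->message_size = msg_size;
	return sm;
}
```

Copying (rather than aliasing) the payload keeps ownership simple across the thread boundary, at the cost of an extra allocation per request.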

Our thread-safe queues (SERVER.input and SERVER.output) will use the COMMAND structure to pass data back and forth. We will use a set of #define’s for our COMMAND.command field:

#define	NETCMD_SHUTDOWN		0	/* Shut down the server         */
#define	NETCMD_XMIT			1	/* Transmission command         */
#define NETCMD_RECV			1	/* Alias for transmission.      */
#define NETCMD_SEND			1	/* Alias for transmission.      */
#define NETCMD_CLOSE		2	/* Command to close connection  */
#define NETCMD_SENDALL		3	/* Send a packet to everyone.   */
#define NETCMD_SENDMULTI	4	/* Send a packet to multiple    */
#define	NETCMD_ACCEPT		5	/* Server accepted a connection */

The “arg” of the command will be the descriptor of the impacted connection in all cases where it makes sense (NETCMD_XMIT, NETCMD_CLOSE, NETCMD_ACCEPT). The queue it came out of will determine what kind of command it is; if we popped the request out of the output queue, then a NETCMD_CLOSE is informing us a connection dropped. If we push a NETCMD_CLOSE to the input queue, we’re letting the server know we want to close that connection.
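To make that convention concrete, here is a minimal sketch of building a close request, assuming a COMMAND shaped roughly like the one from part 2 (the field names here are my guess and may differ from the real header):

```c
#include <assert.h>

/* Hypothetical shape of the COMMAND structure from part 2. */
typedef struct {
	int	command;	/* One of the NETCMD_* defines  */
	int	arg;		/* Connection descriptor        */
	void*	data;		/* Payload (message), if any    */
	int	size;		/* Payload size in bytes        */
} COMMAND;

#define NETCMD_CLOSE	2	/* Command to close connection  */

/* Ask the net thread to close descriptor 'descr'.  The caller would
 * push the result onto SERVER.input; a NETCMD_CLOSE popped off
 * SERVER.output instead means the connection dropped on its own. */
COMMAND make_close(int descr){
	COMMAND c;
	c.command = NETCMD_CLOSE;
	c.arg = descr;
	c.data = 0;
	c.size = 0;
	return c;
}
```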

Hopefully this is reasonably simple and makes sense so far. We will also define some configuration defines and some functions. See the entire header file here. It’s important to note that the entire library is self-contained… the “server_init” function will start the thread and everything. Very fire-and-forget; users of this API do not have to know the guts of the system or even how to spin up the thread. On to some code.

Our initialization function is pretty simple. Our goal is to hide threads from the outside process as much as possible. Here’s the initialization routine:

/*
 * SERVER*	server_init(unsigned short port);
 * This initializes the server.  It returns a server structure, or NULL on
 * failure.  Pass the port to open.
 */
SERVER*	server_init(unsigned short port){
	SERVER*	serv;

	if(!(serv = (SERVER*) malloc(sizeof(SERVER))))
		return (SERVER*) NULL;

	serv->port = port;
	serv->numconnected = 0;
	serv->head = (CON*) NULL;

	/* Create the queues (the thread-safe queue API from part 2;
	   the queue_init()/queue_free() names here are illustrative). */
	if(!(serv->input = queue_init())){
		free(serv);
		return (SERVER*) NULL;
	}
	if(!(serv->output = queue_init())){
		queue_free(serv->input);
		free(serv);
		return (SERVER*) NULL;
	}

	/* Spin up the connection-handling thread on con_main. */
	if(pthread_create(&serv->conthread, NULL, con_main, (void*) serv)){
		queue_free(serv->output);
		queue_free(serv->input);
		free(serv);
		return (SERVER*) NULL;
	}

	return (SERVER*) serv;
}

Basically, we alloc up a SERVER struct, we create the input and output queues, and then we spin a thread using “con_main” for our connection processor.

con_main is a pretty hefty function. It basically loops through, waits on the connections, and processes I/O from the queues. We keep a linked list of connections because about half the time we will be iterating over them linearly. Unfortunately, sometimes we have to iterate over the linked list multiple times (for example, if you disconnect someone while processing I/O), but I’m not sure the overhead is worth using a different data structure.

First, we iterate over the connection linked list and process pending input and output requests. Then, we iterate over the server’s input queue to process new inbound commands. It might be more efficient to have two threads per server (one to handle I/O between descriptors and one to handle I/O with the rest of the program), but it’s also a lot more complicated, and the critical-section collisions may eat much of the efficiency gain.
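The per-iteration wait in a loop like this is typically built on select() with a short timeout, so the thread wakes up regularly to service the queues even when the network is quiet. A minimal, self-contained sketch of that pattern (not the article’s actual con_main; the 100ms timeout is an arbitrary choice):

```c
#include <assert.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* One iteration of a con_main-style wait: watch the listening socket
 * (the real loop would FD_SET every connection descriptor too), but
 * time out after 100ms so queued commands still get serviced promptly.
 * Returns the number of ready descriptors, 0 on timeout, -1 on error. */
int wait_for_io(int listen_sock){
	fd_set rfds;
	struct timeval tv;

	FD_ZERO(&rfds);
	FD_SET(listen_sock, &rfds);
	tv.tv_sec = 0;
	tv.tv_usec = 100000;	/* 100ms: the queue-service latency */

	return select(listen_sock + 1, &rfds, NULL, NULL, &tv);
}
```

The timeout is the knob that trades CPU wakeups against command latency; a blocking select() with no timeout would starve the input queue whenever the sockets are idle.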

Also, this method is kind of heavy in memory allocation calls. It’s not that it’s memory heavy, it’s that it uses malloc() and realloc() a little too frequently. For example, here:

			/* Send data to a client. */
			case NETCMD_XMIT:
				for(ptr = serv->head; ptr; ptr = ptr->next)
					if(ptr->descr == cmd->arg)
						break;
				if(!ptr){
					printf("Tried to send to a non-existent descriptor.\n");
					break;
				}
				/* Found it: realloc() ptr->output and append the message. */

When we send a message over a connection, we append it to that connection’s output buffer by realloc()ing the buffer. It would probably be better to use a smarter buffer structure… but unless you’re handling incredibly high throughput, it may not matter.
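The append itself is just a realloc() plus a memcpy(); here is a self-contained sketch of that pattern against a CON-style growable buffer (the helper name is mine):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Append 'len' bytes to a growable output buffer, the way a NETCMD_XMIT
 * handler might: grow with realloc(), then copy onto the end.
 * Returns 0 on success, -1 if realloc() fails (buffer left intact). */
int outbuf_append(char** buf, int* size, const char* data, int len){
	char* grown = (char*) realloc(*buf, *size + len);
	if(!grown)
		return -1;
	memcpy(grown + *size, data, len);
	*buf = grown;
	*size += len;
	return 0;
}
```

Note that realloc(NULL, n) behaves like malloc(n), so the same helper works for a connection whose output buffer hasn’t been allocated yet. An obvious refinement is growing the buffer geometrically and tracking capacity separately from size, which cuts the allocator traffic the paragraph above complains about.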

Some notes: we use a terminator character (defined with #define CON_EOM) to signify the end of a message. The server doesn’t push data to its output queue until the client sends a CON_EOM. In the case of a MUD/MUCK, “\n” is our CON_EOM because we don’t want command fragments. In other cases, you may want some binary code or something else as the EOM marker. This code also drops the connection if the user overruns the buffer; this may not be appropriate for your use case, but the behavior is relatively easy to change here:

				/* Drop connection if it's overrun the buffer. */
				if(ptr->input_i >= (CON_BUF-1)){
					ptr->connected = 0;
					close(ptr->descr);
					printf("Dropped connection on receive!\n");
				}

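The CON_EOM framing described above reduces to a scan of the input buffer for the terminator; a sketch, using “\n” as the terminator as in the MUD case:

```c
#include <assert.h>
#include <string.h>

#define CON_EOM	'\n'	/* End-of-message terminator (MUD-style) */

/* Scan the first 'input_i' bytes of 'input' for the terminator.
 * Returns the length of the complete message including CON_EOM,
 * or 0 if no full message has arrived yet (keep buffering). */
int message_length(const char* input, int input_i){
	int i;
	for(i = 0; i < input_i; i++)
		if(input[i] == CON_EOM)
			return i + 1;
	return 0;
}
```

When this returns a nonzero length, the server would copy that many bytes out as one message, shift any remaining bytes to the front of the buffer, and adjust input_i accordingly.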
There is a lot of book-keeping and network code that is rather verbose and mostly boilerplate. Feel free to check out the source file here. ‘con_main’ is definitely the most interesting piece!

And then we’ll go on to the next article …throwing Python into the mix!