Raycaster engine

Here's the example code and here's the final result; use the arrow keys to move around. As of February 2021 it only works in Chrome, and you need to go to chrome://flags and enable the experimental WebAssembly feature:

Intro and terminology

This is the level as seen from above. The player's position is the red P dot, the direction he's looking in is the blue line, and the black square is an obstacle, like a box in the level. The direction the player is looking in is expressed as an absolute angle in radians. When the player is looking straight "up" in relation to the map that angle is zero, and when he's looking to the right it is pi / 2, etc.

ray1
fig 1

Before we can render the world from the player's perspective we need to define a Field of View or FOV. The FOV will tell us how much of the level will be visible to the player.
The FOV extends equally on both sides of the blue line that represents the direction the player is facing, so I will refer to this line as the median line. It will become important later for calculating the position of objects in the world. Note also that even though the median looks like a ray, we don't cast a ray directly along it; it's just used for measuring.

ray2
fig 2 The FOV is represented by the pale green triangle stretching out from the player's position. Inside the FOV we cast a bunch of green rays in order to detect objects that reside inside the FOV. Part of the black rectangle is inside the FOV and is hit by five rays, so it will be rendered on screen.

The rays are then cast out in a fan-like pattern as seen from above the player, with the number of rays cast equaling the horizontal resolution we're rendering at. Because the rays don't overlap and we have the same number of rays as horizontal pixels on screen, all the objects in the world get rendered in vertical strips one pixel wide, with each ray being responsible for all the pixels in one strip or column on the screen.
This greatly simplifies the rendering because we have a fixed horizontal position on the screen where the object a ray hits will be placed, so its X coordinate on the screen doesn't have to be calculated. I.e. if we're tracing the third ray from the left then we know that whatever it hits will have an X coordinate of 3 on the computer screen.
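As a rough sketch of that idea (the cast_ray_for_column function here is hypothetical, the real casting code is shown later in this article):

	/* conceptual sketch only: one ray per screen column, so whatever ray
	   number "column" hits gets drawn in screen column "column" */
	for( int column = 0; column < screen_get_width(); column++ ) {
		cast_ray_for_column( column ); //hypothetical helper standing in for the real recursive cast
	}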

What we then do is calculate how far a ray had to travel before striking an object; the vertical slice of the object rendered by that ray is then scaled accordingly. The longer the ray had to travel, the shorter the rendered vertical slice will be. This is best exemplified if we look at how walls are rendered:

ray16
fig 3 The world as seen from the player's perspective. Here the player is looking at the corner of two walls. One running away from the player on the left and one cutting across the player's view on the right.

Walls, like all objects, are rendered in segments. In fig 3 the width of each segment has been exaggerated; they're only supposed to be one pixel wide. A segment is only defined by two things: its horizontal placement on screen and the height of the segment. The height is defined by how far apart its top and bottom end points are. The farther a ray had to travel to reach that wall slice, the closer together the two end points will be. The middle of a wall segment will always be placed at the middle of the screen.
This means that the Y coordinates for where to place the two end points are very easily calculated. The segment's center is at the middle of the screen and we scale it based on how far the ray hitting it had to travel.
Because we're not projecting anything onto a screen like in a normal renderer, the whole perspective is faked. This fakery leads to some perspective problems that we have to work around.

The general idea is that the width of a rendered object depends on how many rays are hitting it and the height depends on the distance those rays had to travel. Let's look at the first problem we have to work around, which has to do with how we decide the height.

Perspective correction: Height

Let's consider an object, say a box, that moves perpendicular to the direction the player is looking. Maybe the player is strafing sideways and the box enters the player's field of view from the left, cuts right across the FOV and exits to the right. Since it's moving at right angles to the direction the player is looking, it will both enter and leave the FOV at the same angle. We instinctively know that the way this should look is a box of constant size sliding across the screen from left to right.
But we said that the height of an object is determined by how far a ray that hit it had to travel, and if we look at the length of the rays during the box's traversal we notice that they are longer when the box is at the edges of the FOV, equating to the periphery of the player's vision, and shorter when the box is in the middle of the FOV. This means that the box will appear to change height depending on whether it's at the edges or the center of the screen, giving a fish eye effect where objects in the center of the screen appear magnified.

ray4
fig 4 This is the box at three different positions during its traversal. We only look at a single ray for each position as an example of all the rays striking the box. The center ray clearly has to travel less distance than the two peripheral ones.

ray5
fig 5 This is how the box would be rendered on screen. It would grow in height until it reached the center of the screen and then begin to shrink again. This is because all the vertical slices it's made up of would be taller in the center, since the rays for those slices would have traveled a shorter distance.

The way to fix this is to not measure the length of the rays from the player but rather how "deep" into the FOV they have traveled. The way we'll do this is to project each ray onto the median line, the median line being an objective measure of how far down the FOV an object is.
The "projection" will be done by taking the dot product between the ray itself and the median's unit vector. It's important that we use the median's unit vector because this will give us the proper magnitude the ray would have if projected onto it.

ray6
fig 6 Taking the dot product between the ray and the median's unit vector can be likened to shining a light at a right angle towards the median and seeing where the ray's shadow falls on it. Here the green line next to the blue median is the dot product and therefore a projection of the ray onto the median.

ray7
fig 7 By doing this for all rays, they will all have the same length as long as they end on the same perpendicular cross section of the FOV. Here the red line is a perpendicular cross section similar to the one that the box in fig 4 traveled along. The dot product between all the rays hitting this cross section and the median will have the same length, which is represented by the dark green line next to the median.

The function for finding how deep into the FOV a ray is is rather simple since it's just a dot product.
1. Using the start and end position of the ray calculate a vector for it.
2. Take the dot product between this vector and the median line's unit vector.
3. The resulting dot product equals the magnitude or length of a new vector that's been projected onto the median.

	double ray_fov_depth( const double xUnitVecMedian, const double yUnitVecMedian, const double xRayStart, 
			const double yRayStart, const double xRayEnd, const double yRayEnd )
	{
1.		const double xVecRay = xRayEnd - xRayStart;
1.		const double yVecRay = yRayEnd - yRayStart;
2.		const double dot = xUnitVecMedian * xVecRay + yUnitVecMedian * yVecRay;

3.		return dot;
	}

The median line's unit vector can be calculated easily if we know the absolute angle inside the world in which it's pointing; all we have to do is take the sine and cosine of it. Sine and cosine will both return values between -1 and 1 because they represent the two catheti of a right triangle inscribed in a circle of radius 1. You can look into exactly how sine and cosine are calculated, but suffice it to say that taking the sine and cosine of an angle will give you the components of a vector which always has a magnitude or total length of one, meaning it's a unit vector.
Because I'm using raylib I also have to invert the y component of the vector. This is because in raylib the origin is placed in the top left corner of the screen, so y coordinates increase the farther down you go, as opposed to how most coordinate systems work where the y coordinate increases as you go up. This is why there's a minus in front of the sine function.

const double xUnitVecMedian = cos( medianAngle );
const double yUnitVecMedian = -sin( medianAngle );

Perspective correction: Width

The next problem has to do with how to space the rays and how wide an object appears on screen. The width of an object is decided by how much of the total FOV angle it covers. The more of the FOV taken up by a single object, the wider it will appear. If it takes up the whole FOV it will stretch across the entire screen, if it takes up 50% it will cover half, etc.
Let's say we space the rays equally within the FOV so that there's the same number of degrees of separation between each ray. Meaning that if we have a FOV of 90° and 100 rays, the shift between each ray will be 90 / 100 = 0.9°.

We'll again use the example of a box sliding across the FOV and see how much of the FOV angle it covers. Let's assume the box is 20 pixels wide and, to simplify things, flat and 100 pixels away along the Y axis.

I'm not going to show the exact calculations because they're not important here, but when the box is at the left or right edge of the FOV it will cover 6.34° of it, while when it's in the middle it will cover 11.42° of the FOV.
Meaning that when the box is at the player's periphery it will take up 6.34 / 90 = 7% of the screen's width, but when it's in the center it will occupy 11.42 / 90 = 12.7% of the screen width. So the box will appear to expand lengthwise as it moves towards the middle of the screen and then contract, again resulting in a fish eye effect.
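If you want to double check those numbers, here's a minimal standalone sketch (assuming, as above, a flat box 20 pixels wide at a perpendicular distance of 100 pixels, with its far corner just touching the edge of a 90° FOV):

	#include <math.h>
	#include <stdio.h>

	int main( void )
	{
		const double pi = acos( -1.0 );
		const double dist = 100.0;     //perpendicular distance to the flat box
		const double halfWidth = 10.0; //the box is 20 pixels wide

		//box centered on the median: it spans atan( 10 / 100 ) on each side of it
		const double centerDeg = 2 * atan( halfWidth / dist ) * 180.0 / pi;
		//box pushed up against the 45 degree edge of the 90 degree FOV
		const double edgeDeg = ( atan( dist / dist ) - atan( ( dist - 2 * halfWidth ) / dist ) ) * 180.0 / pi;

		printf( "center: %.2f deg, edge: %.2f deg\n", centerDeg, edgeDeg ); //prints roughly 11.42 and 6.34
		return 0;
	}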
Let's visualize this.

ray8
fig 8 All the rays in this FOV are spaced with the same angle between them. We'll assume the boxes are supposed to be flat and so only count the rays striking their bottom side. Only five rays strike the box when it's at the periphery of the FOV, but nine rays strike it when it's at the center. Since each ray represents a horizontal position on the screen, the box will be almost twice as wide when it's at the screen's center as when it's at the sides.

ray9
fig 9 This is what it would look like on screen.

What we need to do to correct this is, instead of spacing our rays with equal angles between them, space them so that the distance between them at any perpendicular cross section of the FOV is equal. Then as the box moves horizontally through the FOV the number of rays striking it will remain constant.

ray10
fig 10 We've now set the angle between the rays so that they are all spaced an equal distance apart across a cross section of the FOV, the red line, as long as that cross section is perpendicular to the FOV median. If we now count the rays we'll see that the box is hit by seven rays in all three positions along its travel.
However, now the angle between each ray isn't constant; the angle is much smaller between rays at the periphery compared to rays at the center.

The angle between the rays will not be constant but will instead vary depending on how far from the median they are, but how do we calculate that angle?
First we recognize that if we space the rays equally at any perpendicular cross section, then because they travel in straight lines they will remain equally spaced at any other cross section we make farther down the FOV. So let's make a cross section at some arbitrary distance along the median called M:

ray11
fig 11 Here we've made a perpendicular cross section represented by the red line at length M along the median. We also know the angle of the FOV, labeled A. What we want to calculate is the cross section length C.

In order to divide our rays equally along this cross section we first need to know its length. The known measurements we have to work with are the distance M along the median and the FOV angle A. Notice that if we divide the FOV in half then we're dealing with two right-angled triangles, and from there it's a simple trig problem.

ray12
fig 12 After splitting the FOV in half we've now got two right triangles. The measurement M along the median is the same as before but for each triangle the angle B now equals half the FOV angle. What we want to calculate for each triangle is the length H which equals half the cross section length.

From trigonometry we know that:

tan( B ) = opposite / adjacent
tan( B ) = H / M

Opposite in this case is H, which we want to know, and adjacent is M, which we do know. However, as stated before, we can place the cross section at any distance along the median and still get the same end result, so what if we were to place it at a distance of "1"? Then "adjacent", which is our denominator, would equal one and so would disappear:

tan( B ) = H / 1
tan( B ) = H

An M value of 1 is the same as using the unit vector for the median, which as we've seen above is available to us. By picking a cross section at this median distance we save ourselves the trouble of performing a division.
Now we know that the angle B equals half the FOV angle, and we also know that H is only half the length of the cross section so we'll need to multiply it by two. The calculation to get the cross section length we need is then:

tan( FOV / 2 ) * 2 = C

Which in this specific case comes out as:

tan( 90 / 2 ) * 2 = 2

The cross section length along which we'll need to space our rays equally will in this case be 2. We could now assign each ray a position along this line between 0 and 2, but it simplifies things if we instead make the range go from -1 to 1, making the origin, or 0, the tip of the median line.
The reason for this is that it will make all the rays to the left of the median line have a negative position along the cross section, going from -1 to 0, which in turn will make the angles for rays on the left side of the median negative. Since the angles are relative to the median this is good, because it means they will automatically be subtracted when added to the median's angle, and the resulting angle will be turned to the left of the median. The rays on the right side will have positive cross section positions and as a result positive angles, turning to the right of the median angle.
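As a quick sanity check in code (note that C's tan() expects radians, which is why the ray_init function shown later uses screen_get_fov_radians()):

	//quick check of the cross section length for a 90 degree FOV
	const double fovRadians = 90.0 * acos( -1.0 ) / 180.0; //convert 90 degrees to radians
	const double crossSectionLength = tan( fovRadians / 2 ) * 2; //comes out as 2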

ray13
fig 13 The total cross section length has been calculated as being 2. The origin point for the cross section line is chosen as being in the middle of it, right at the tip of the median line, marked 0.

Going from left to right, our first ray will have a position along the cross section of -1 and the last ray will have a position of 1. It's this first ray's position we need for subsequent calculations, because from this starting position we're going to step a bit to the right for each ray. Let's call this starting position K; calculating it is easy:

-C / 2 = K

With our numbers:

-2 / 2 = -1

Now we can calculate where along the cross section line each ray intersects. First we need the distance or stepping between each ray; let's call this value S. This just amounts to dividing the total cross section length of 2 into the same number of pieces as we have rays. Say the number of rays is 100, then the stepping between each ray is:

C / Number of rays = S

Which in our example comes out as:

2 / 100 = 0.02

To calculate the distance, call it D, from the 0 origin at the tip of the median line to where a ray crosses the cross section, we use this formula, where N is the number of the ray we want to find the position for:

K + N * S = D

Again for us this becomes:

-1 + N * 0.02 = D

-1 being the left most point as described above and N being the ray's number.

We now need to find out the angle each ray has. To be more precise we need to figure out the ray's absolute angle in reference to the game world but to get there we'll first calculate a relative angle offset from the median angle. Remember that the median represents the direction the player is looking in and is expressed as an absolute angle inside the game world.

Let's use ray number 42 as an example. Using the equation for getting its distance D from the tip of the median line we get:

-1 + 42 * 0.02 = -0.16

The measurements we then have to work with are:

ray14
fig 14 We now want to find the angle "a" by which the ray is offset from the median.

Again we know from trig that:

angle = atan( opposite / adjacent ) 

Looking at fig 14, "adjacent" is again the median, which we've prudently chosen a value of 1 for so we can get rid of the denominator, and "opposite" is -0.16, which gives us an angle in radians of:

tan( a ) = -.16 / 1
a = atan( -.16 )
a = -.158655262

The 42nd ray will then have an offset from the median line of -.1587 radians. If we know the angle the median line has in the game world we can just add -.1587 to that to get the absolute angle of the ray in relation to the game world and use that to see what it hits.

In order to begin we then first have to have a function that calculates a couple of values that will be used when casting a ray:

static double g_firstRayOffset; //The K value
static double g_rayStep; //The S value

void ray_init()
{
	double crossSectionLength = tan( screen_get_fov_radians() / 2 ) * 2; //The C value
	g_firstRayOffset = -crossSectionLength / 2; //The K value
	g_rayStep = crossSectionLength / screen_get_width(); //The S value
}

Here we calculate all three values we need and put the first ray offset and ray stepping into global variables. Our raycasting algorithm can then use them to get the angle offset for a single ray like this:

atan( g_firstRayOffset + rayNbr * g_rayStep ); //K + N * S = D

Caution

When reading up on raycasting I often came across language describing the cross section that you have to make through the FOV, like in fig 7, as "projection planes" or "screen planes", sometimes with camera terms like focal points thrown in. This confused me a lot because I then tried to understand what was happening in terms of an optical model, like screen projections and how cameras work. This line, however, in no way functions like a virtual screen. Nothing is being projected onto it, as is evident by the fact that objects between the player and this cross section will also be rendered to the screen, not just objects in front of it. Things cannot be projected onto both sides of a virtual screen. The only thing it has in common with the actual screen is the orientation; it's always perpendicular to the direction the player is looking, but its size and placement have no effect on the rendered image, and where this cross section is made is arbitrary.

The way I like to think of this line, and of how raycasting works in general, is that you're constructing a trigonometrical jig that you then use for spacing the rays and measuring ray distance in order to get the desired result.
Even though the "ray" in raycasting would imply that what's going on is in some way optical, the raycaster simplifies things to the point that normal projection principles no longer apply. What's really going on is that you're feeling out what's in front of the player, and this data then has to be approximated into what an image of those different parts would look like. This makes it extremely fast but also means that we end up with problems that need to be worked around. The two main problems are, as stated above:
1. We can't measure the absolute ray distance because that causes a fish eye effect in the vertical direction, with the height varying when it shouldn't.
2. We can't space the rays with equal angles between them because this again causes a fish eye effect, but this time in the horizontal direction.
I have chosen certain ways of correcting these errors, but there are many different ways of fixing them. You can find some that work very differently from the way I do things here, and I'm not saying that other ways of doing it are wrong or even worse. What I am saying is that you should recognize these fixes for what they are, which is essentially jigs for placing things in their proper positions. There is no camera or virtual screen to project onto; the entire perspective is faked, which makes some of the methods we use to fake it seem highly abstract and difficult to grasp, since they don't apply to a physical thing like a camera or screen that we can understand. But trying to think of them in concrete terms instead of recognizing them for the mathematical tricks that they are is just going to confuse the reader.

Scale

An object should scale in size depending on how close it is to our player. As it recedes it should become smaller and when it comes closer it should grow in size. Generally speaking, the function you use to scale objects in 3D is:

1 / distance = scale

Distance being how far away from our player an object is. So as the distance between the player and the object grows larger, the scaling for the object approaches zero. However, if the distance is measured in pixels then having 1 as the numerator in the equation might not be such a good idea. It would mean that the object is only displayed at its natural scale of 1 when it's one pixel away from the player, and it would shrink in size very rapidly as it recedes from the player. What we need is a larger numerator, so let's think about how close an object can really get to our player.
Our player is represented by a square and the closest an object then can get is half the length of one of the walls of the square:

ray15
fig 15 From the perspective of a square with sides 10 pixels long the closest an object can come to it is 5 pixels.

So now we know how close an object can get, but how large should we scale it when it's at this distance? That's really up to the programmer. If we want it to be displayed without any scaling we would pick a numerator of half the length of one of the square's sides. In the above example this would be a numerator of 5. 5 / 5 = 1, so when an object was pressing up against the player it would be scaled to its default size. I prefer to have walls and objects scaled to twice their size when the player is pressing up against them, since this makes it seem like you are a lot closer to them by making them more pixelated. In order to scale them to twice the size we choose a numerator of twice the minimum distance, so in this case 10. 10 / 5 = 2, which is the scale we wanted when we're pressed up against something. If you want the walls to appear larger when the player gets close to them you just increase the numerator.
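As a small sketch of the scaling described above (the names and numbers here are just for illustration, the real code later uses "scaleOneDist" for the numerator):

	//minimal sketch, assuming a square player with 10 pixel long sides (minimum distance 5)
	double object_scale( const double distance )
	{
		const double minimumDistance = 5.0;           //the closest anything can get to the player
		const double numerator = minimumDistance * 2; //gives a scale of 2 when pressed up against something
		return numerator / distance;                  //approaches 0 as the distance grows
	}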

Walls

How we render walls is going to be fundamental for all other rendering. Remember that we render a wall in vertical slices, each ray being responsible for one slice.
Where a slice is placed horizontally on the screen is determined only by which ray hit it, and the slice itself is then defined by its two vertical end points on the screen. How far apart these two points are depends on how far away the wall is; the farther away, the closer together the two points will be, and subsequently that particular wall segment will be shorter.

As we can see from fig 3, the farther away a wall segment is, the closer together its two end points will be, and they will meet at the middle of the screen as the wall segment shrinks to nothing. The middle of the screen thus represents a natural minimum for how close these end points can be to one another, but how far apart can they get?
The screen height is an obvious maximum for how far apart these two points can be. If the scaling then goes from 1 to 0 depending on how far away the wall is, we want the end points to be at the top and bottom of the screen at scale 1, making the wall fill the entire screen, and both end points to be at the middle of the screen when the scale is 0.
Since we cannot look up or down the points will always be mirror images of one another so we'll calculate the screen position for the top point of a wall and then mirror it:

const int yScreenTop = screenHeight / 2.f - ( screenHeight / 2.f ) * scale;

Here we can see that when scale is 0 the rightmost term disappears and we're left with screenHeight / 2. So when scaling is 0 the point will be placed at the middle of the screen. When scaling is 1 we get the expression "screenHeight / 2 - screenHeight / 2", which of course comes out as zero, which is at the top of the screen. To calculate the bottom point we can simply subtract the top point's y position from the screen height, which mirrors them across the middle of the screen:

const int yScreenBottom = screenHeight - yScreenTop;
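Plugging in some hypothetical numbers to see where the two end points land:

	//worked example, assuming a 200 pixel tall screen and a scale of 0.5
	const int screenHeight = 200;
	const double scale = 0.5;
	const int yScreenTop = screenHeight / 2.f - ( screenHeight / 2.f ) * scale; //100 - 50 = 50
	const int yScreenBottom = screenHeight - yScreenTop;                        //200 - 50 = 150
	//the wall segment covers the middle half of the screen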

Wall scale

You might want the wall to stretch beyond the vertical screen borders as the player gets closer to it in order to give a sense of the wall being very tall. This is already accomplished by making the "scale" variable go beyond 1. Remember from above that you can manipulate the numerator in the scale equation to give it a value greater than 1? I prefer to make the largest possible scale value 2 and so if we input 2 into our yScreenTop equation we get:

screenHeight / 2 - ( screenHeight / 2 ) * 2 = -screenHeight / 2

So now the top end point for the wall will no longer be at the top of the screen but half the screen height beyond the top of the screen. The bottom end point will correspondingly be half the screen height below the bottom of the screen. This gives the effect of the player seeming to be very close to the wall.

You could of course instead change the denominator of the "screenHeight / 2" term that gets multiplied by the scale in the yScreenTop equation to something less than 2. What you're doing then is saying that the "natural" size of a wall when the scale is 1 is no longer equal to the screen's height. So there are really two ways to manipulate wall scaling: the first being the scaling numerator and the second being setting the wall's size at scale 1 to be bigger or smaller than the screen height.
It comes down to preference, but I go with the scaling numerator (see the sketch below).
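To make the two options concrete (the values and the "minimumDistance" and "distance" names are hypothetical, not the code used later):

	//option 1: a bigger numerator in the scale equation, so the scale reaches 2 up close
	const double scale = ( minimumDistance * 2 ) / distance;
	//option 2: keep the scale at most 1 but give the wall a "natural" height taller than
	//the screen, e.g. dividing by 1.5 instead of 2 in the term that gets scaled:
	const int yScreenTop = screenHeight / 2.f - ( screenHeight / 1.5f ) * scale;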

Tracing rays across the map

The map system I'm using is based on tiles, so the ray's starting position is the tile the player is standing on, which can therefore be considered clear. We then need to trace the ray into the next tile and check if that tile is an obstacle. If it is, we draw a vertical wall segment for that ray based on how far we got; if the tile is clear, we trace the ray on to the next tile.

ray17
fig 17 The player's position is the large red circle in the bottom left tile. Here we trace a single ray across the map. Its starting position is naturally the player's position. We then trace it to where it exits the tile, which gives us a collision point along one of the tile's four walls, represented by the first of the smaller red circles. Each time we hit a border wall we reset the ray's position to this collision point inside the new tile and trace to where it exits into the next tile. This continues until we hit the solid tile, second from the right at the top; then we calculate how far the ray traveled and draw a segment of wall based on that.

There are several ways of doing this tracing but I've already made a collision detection system that can be adapted for this purpose. As I've written a description of how it works here I won't go into too much detail.

ray18
fig 18 We'll start out in the same tile as the player and use the ray's vector to see which of the four tile "walls", or more accurately borders, it collides with. Here we can see that it collides with the rightmost border. Quick recap of how the collision detection works: the walls around a tile are described as vectors running clockwise, and the collision function will return the vector of the wall the ray intersected along with a percentage of where along that wall the collision occurred.
In this example, since the ray collides with the right wall, what we get back is a vector pointing down and a percentage of 30%, which is how far from the vector's starting point the ray intersected.
Because the vector returned from the collision algorithm points down we can deduce that it's the right wall, and therefore we've entered the tile to the right of our starting position. This will come into play later.

Coordinate system

We need to have a quick discussion about how the coordinate system works since this simple thing often trips me up.
The way Raylib and I believe most toolkits handle screen coordinates is that the top left point of the screen is the origin. X coordinates then increase as we move right on the screen and Y coordinates increase as we move down. Contrast this to a regular Cartesian system where we have the Y coordinates increase as we move up.

This affects us because when tracing a ray into a new tile we need to get the coordinate where it entered that tile in order to set that as the new starting position and trace it into the next tile. In order to do that we're going to start with relative coordinates expressed in percentage of where in the new tile the ray emerged. This relative coordinate system will have to follow how the main coordinate system works with having the origin in the upper left corner.

ray19
fig 19 The origin for our blue tile's relative coordinates is also in the tile's upper left corner. X coordinates go right from 0 to 100%. Y coordinates go down from 0 to 100%. Let's say that a ray entered the blue tile from above, right in the middle of the tile. Its relative coordinates at the point it entered would be Y 0% and X 50%. This is the sort of data we'll get out of the collision algorithm, which we then need to turn into an absolute coordinate in order to set our new starting position.

Let's again look at the example in fig 18. We've determined that the ray has entered the tile to the right of the one we started in. Since the ray has emerged on the left hand side of this new tile we know that the relative X position in this new tile is 0%; we're all the way to the left of it.
The vector describing the wall we collided with ran downwards, top to bottom, meaning we can use the percentage of where the collision occurred on it directly as our relative Y coordinate in the new tile. As we can see from fig 18, the collision happened 30% from the vector's starting point, which means that our Y position in the new tile is 30% down from the new tile's top left corner.
I hope this is somewhat clear but what if the ray had moved left instead?

ray20
fig 20 Now the ray has entered the tile to the left of our starting position. Again the collision has occurred 30% from the start of the vector describing the tile's wall but this time the vector moves upwards, bottom to top.

As we can see from fig 20 we've now entered a different tile, and this time from its right hand side. This means that our relative X coordinate in this new tile is 100%, all the way to the right. Because the vector in this case runs in the wrong direction, upwards, compared to our Y coordinates which run downwards, we'll have to invert the point of collision in order to get the correct Y coordinate inside the tile. So we take 100% - 30% = 70%, meaning we are 70% down from the new tile's top left corner.

So if the vector we've collided with runs in the opposite direction of our coordinate system, we need to invert the percentage given back from the collision algorithm that describes where along this vector the intersection occurred. Furthermore, the percentage represents either the relative X or Y component of our ray's position where it emerges in the new tile, depending on whether the collision is with a vertical or horizontal vector.
Had we for example intersected the bottom border, the percentage given from the collision algorithm would describe the relative X component in the new tile and would have to be inverted, because the bottom vector runs to the left which is the opposite of the coordinate system in fig 19.

I find this confusing myself sometimes but hopefully it becomes clearer below when we get into the code.

Back to tracing rays...

Using the components of the vector from collision detection we can now figure out which wall it's collided with, the up, down, left or right hand wall.
Because the wall vectors are all cardinal we only need to check one of their components to determine what direction it's pointing in. E.g. a wall vector with a positive X component will be pointing right, and we know all the wall vectors run clockwise around the tile so a vector pointing right means it's the top most wall we've collided with.
Similarly if the vector has a positive Y component we know it points down and so must represent the right wall of the tile.
When we've determined what side of the tile we've collided with we move the ray into the next tile in this direction and calculate a new starting position for it.

1. Calculate the pixel position for the center of the tile we're currently in. This is done by taking the tile index multiplied by how wide a tile is in pixels plus half the width of a tile in pixels.
2. Get the four walls or borders surrounding the tile. This means getting coordinates for their two end points as well as a vector describing each of them.
3. See which wall the ray's vector intersects.
4. Output the vector of the wall it intersected as well as the point of intersection in percent.

	void ray_trace_to_tile_edge( const int xTile, const int yTile, const double xCenter, const double yCenter, 
			const double xVec, const double yVec, const int tileSize, const int tileHalfSize, 
			double *dxWallOut, double *dyWallOut, double *prcWallOut )
	{
1.		const double xTileCenter = xTile * tileSize + tileHalfSize;
1.		const double yTileCenter = yTile * tileSize + tileHalfSize;
2.		double walls[2][8];
2.		const int nbrWalls = collision_get_walls( xTileCenter + xVec, yTileCenter + yVec, 0, xTileCenter, yTileCenter, tileHalfSize, walls );

		for( int i = 0; i < nbrWalls; i++ ) {
			const double p1x = walls[ i ][ X_COORD_1 ]; 
			const double p2x = walls[ i ][ X_COORD_2 ];
			const double p1y = walls[ i ][ Y_COORD_1 ];
			const double p2y = walls[ i ][ Y_COORD_2 ];
			const double dxWall= walls[ i ][ X_VEC ]; 
			const double dyWall= walls[ i ][ Y_VEC ];

3.			double prcWall;
3.			const enum collisiontype collwall = collision_intersection( p1x, p1y, p2x, p2y, xCenter, yCenter, xVec, yVec, &prcWall ); 
3.			if( collwall == NO_COLLISION ) continue;

4.			*dxWallOut = dxWall;
4.			*dyWallOut = dyWall;
4.			*prcWallOut = prcWall;
		}
	}

Now that we've determined which wall of the tile the ray has intersected and we've gotten a vector describing it we can use that vector to figure out which tile the ray has crossed into.
1. "xTile" and "yTile" in the function arguments have been set by the calling function to the current tile we're in. We'll then add of subtract to these indices depending on which adjacent tile the ray moves into.
2. If the Y component of the wall vector is going up we know we've collided with the left hand wall and so:
2.1. Move to the left of our starting tile.
2.2. Since we've moved left that means we've entered the new tile on it's right hand side. Due to how the coordinate system works our relative X position on the tile is at 100% the width of the tile, which is 1 in decimal.
2.3. The percentage given by the collision algorithm thus represents the relative Y position in the new tile. Because the vector for the tile's left side wall that we've detected a collision for points upwards, bottom to top this means we must invert the percentage where the collision occurred by taking 1 minus the percentage.
3. The wall vector's Y component is positive so it's the right hand wall of the tile we're in.
3.1. Move right one tile position.
3.2. We've entered the new tile from it's left side so our relative X position is 0%.
3.3. The wall vector is running downwards same as our coordinate system so we can directly use the percentage of where the ray intersected it as our relative Y position.
4. Wall vector has a positive X component so it's the top wall.
4.1. Move up one tile.
4.2. Set the relative X position in the new tile to where the ray intersected the wall vector.
4.3. Set the relative Y position to 100%.
5. X component is negative for the wall vector so it's the bottom wall.
5.1. Move down one tile.
5.2. Invert the intersection percentage to get the relative X position in the new tile.
5.3. Set the relative Y position to 0 since we're at the top of the new tile.
6. Now we've determined X and Y index of our new tile and a relative position in percent describing an entry point for our ray in it. From this we can calculate an absolute position in the world where the ray entered into this new tile.
To do so we take the tile index and multiply it by the tile size in pixels, which puts our absolute position at the upper left corner of the tile. Then we take the relative percentage of where along its borders we've entered and again multiply this by the tile size in pixels. Add these two things together and we have an absolute entry position for our ray.

void ray_get_end_pos( const double xVecWall, const double yVecWall, const double prcWall, const int tileSize, 
1.		int *xTile, int *yTile, 
		double *xPosAbs, double *yPosAbs )
{
	double xPosRelPrc, yPosRelPrc;
2.	if( yVecWall < -.1 ) {
2.1		*xTile -= 1;
2.2		xPosRelPrc = 1;
2.3		yPosRelPrc = 1 - prcWall;
	}
3.	else if( yVecWall > .1 ) {
3.1		*xTile += 1;
3.2		xPosRelPrc = 0;
3.3		yPosRelPrc = prcWall;
	}
4.	else if( xVecWall > .1 ) {
4.1		*yTile -= 1; 
4.2		xPosRelPrc = prcWall;
4.3		yPosRelPrc = 1;
	}
5.	else { //( xVecWall< -.1 )
5.1		*yTile += 1;
5.2		xPosRelPrc = 1 - prcWall;
5.3		yPosRelPrc = 0;
	}

6.	*xPosAbs = *xTile * tileSize + tileSize * xPosRelPrc;
6.	*yPosAbs = *yTile * tileSize + tileSize * yPosRelPrc;
}

Recapping and code

Now that we've got that out of the way let's go through the code for the things discussed so far.

This is where our raycasting algorithm will begin. The function arguments are:
"x" and "y" is the player's position.
"medianAngle" is the angle at which he's looking.
"halfSize" is half the length of one side of the square that describes our player.
1. Calculate a unit vector for the median line as explained above.
2. Calculate which tile the player is standing in, meaning its X and Y index.
3. Clear an image buffer. Some explanation for this: Raycasting is entirely done in software and it draws one pixel to the screen at a time. Modern hardware is not at all happy with this approach and so if you actually try and use the Raylib drawing functions to draw one pixel at a time the performance will tank. To get around this you create an "image" which is just a pixel grid the same size as the screen and then change the colors of each pixel of this grid when you want to draw a pixel. Once all the pixels are set on this grid you convert it to a texture and render it to the screen.
A bit convoluted and the implementation will change if you use a different library than Raylib.
4. Cast all the rays. The first argument to this function is 0 meaning the first ray. The function is recursive and will call itself with an incremented ray number until the ray number equals the horizontal screen resolution.
5. Convert the pixel grid to a texture.
6. Draw this texture to the screen.

void ray_draw( const double x, const double y, const double medianAngle, const double halfSize )
{
1.	const double xUnitVecMedian = cos( medianAngle );
1.	const double yUnitVecMedian = -sin( medianAngle );

2.	const int xTile = x / level_get_tile_size();
2.	const int yTile = y / level_get_tile_size();

3.	draw_fill_image( screenImage, 0, screenImage.width * screenImage.height, RAYWHITE );

	BeginDrawing();

4.	ray_cast_ray( 0, x, y, xTile, yTile, medianAngle, xUnitVecMedian, yUnitVecMedian, halfSize * 2, screen_get_width(), screen_get_height(), 
		level_get_tile_width(), level_get_tile_height(), level_get_tile_size(), level_get_tile_half_size() );	

5.	UpdateTexture( screenTex, screenImage.data );
6.	DrawTexture( screenTex, 0, 0, WHITE);
	EndDrawing();
}

Let's look at the raycasting function.
1. If the ray number equals the horizontal screen resolution then we're off the screen and can quit.
2. Calculate the relative offset angle from the median for our ray.
3. Use this to calculate an absolute angle for the ray in the world.
4. Calculate a unit vector describing the ray.
5. Trace the ray until it hits a wall. This function returns the Y coordinates for the top and bottom points of the vertical screen segment where the wall is drawn. These return values are used when drawing the floor and ceiling, which isn't currently covered in this tutorial.
6. Cast the next ray by incrementing the ray number.

void ray_cast_ray( const int rayNbr, const double xPlayer, const double yPlayer, const int xTile, const int yTile, 
		const double medianAngle, const double xVecMedian, const double yVecMedian, const double scaleOneDist, 
		const int screenWidth, const int screenHeight, const int levelTileWidth, const int levelTileHeight, 
		const int tileSize, const int tileHalfSize )
{
1.	if( rayNbr == screenWidth ) return;

2.	const double rayAngleRel = atan( g_firstRayOffset + g_rayStep * rayNbr );
3.	const double rayAngleAbs = medianAngle - rayAngleRel;

4.	const double xRayVec = cos( rayAngleAbs );
	//because raylib has an inverted y axis for the screen we need to invert the y vector hence -sin
4.	const double yRayVec = -sin( rayAngleAbs );

5.	int yScreenTop, yScreenBottom;
5.	ray_trace_to_wall( rayNbr, xTile, yTile, xPlayer, yPlayer, xRayVec, yRayVec, xVecMedian, yVecMedian, scaleOneDist, 
		screenWidth, screenHeight, levelTileWidth, levelTileHeight, tileSize, tileHalfSize, &yScreenTop, &yScreenBottom ); 

6.	ray_cast_ray( rayNbr + 1, xPlayer, yPlayer, xTile, yTile, medianAngle, xVecMedian, yVecMedian, scaleOneDist, 
		screenWidth, screenHeight, levelTileWidth, levelTileHeight, tileSize, tileHalfSize );
}

Now we get to the function that actually traces a ray to a wall.
1. Use the collision algorithm to detect which of the four walls of the tile we're currently in the ray intersects. This returns a vector for that wall and also a percentage of how far from the vector's starting point the intersection occurred.
2. Based on which direction the wall's vector is pointing we then can then deduce which tile the ray has moved into.
2.1. We get an X and Y tile index for this new tile.
2.2. And also an absolute X and Y position in the world where the ray entered this new tile.
3. Check if the ray exited the map without hitting a wall.
4. Check if the ray hit a wall.
5. If the ray is not outside the map and has not hit a wall, we trace its path into the next tile.
6. If it has hit a wall or is outside the map then we need to get a top and bottom Y coordinate for the wall segment's end points on screen.
6.1. Calculate how far the ray managed to travel.
6.2. Based on the distance traveled, calculate a scale for the wall segment. The "scaleOneDist" variable has been set by the calling functions to twice the minimum distance, i.e. the full side length of the square describing the player, as explained in the Scale section above.
6.3. Calculate the Y coordinate for the top point of the wall's vertical screen segment.
6.4. Invert this position to get the bottom Y coordinate.
6.5. Return these two Y coordinates. This is again for use by the floor and ceiling drawing function which is not currently described in this tutorial.
6.6. If we've hit a wall then we need to draw a part of that wall's texture onto the vertical screen segment we've just calculated. The drawing will be explained next. If the ray simply exited the map then nothing will be drawn and whatever color of background we cleared our pixel buffer with previously will be shown.

void ray_trace_to_wall( const double xScreen, const int xTile, const int yTile, const double xRayStart, const double yRayStart, 
		const double xRayVec, const double yRayVec, const double xUnitVecMedian, const double yUnitVecMedian, 
		const double scaleOneDist, const int screenWidth, const int screenHeight, const int levelTileWidth, 
		const int levelTileHeight, const int tileSize, const int tileHalfSize, int *yScreenTopOut, int *yScreenBottomOut )
{
1.	double dxWall, dyWall, prcWall;
1.	ray_trace_to_tile_edge( xTile, yTile, xRayStart, yRayStart, xRayVec, yRayVec, tileSize, tileHalfSize, &dxWall, &dyWall, &prcWall );

2.1	int xTileNew = xTile;
2.1	int yTileNew = yTile;
2.2	double xRayEnd, yRayEnd;
2.	ray_get_end_pos( dxWall, dyWall, prcWall, tileSize, &xTileNew, &yTileNew, &xRayEnd, &yRayEnd);

3.	const bool outsideMap = xTileNew < 0 || yTileNew < 0 || xTileNew >= levelTileWidth || yTileNew >= levelTileHeight;
4.	const bool hitWall = outsideMap ? false : level_get_obstacle( xTileNew, yTileNew ) == SOLID;
	
5.	if( !outsideMap && !hitWall ) ray_trace_to_wall( xScreen, xTileNew, yTileNew, xRayStart, yRayStart, xRayVec, yRayVec, 
		xUnitVecMedian, yUnitVecMedian, scaleOneDist, screenWidth, screenHeight, levelTileWidth, levelTileHeight, 
		tileSize, tileHalfSize, yScreenTopOut, yScreenBottomOut ); 
6.	else {
6.1		const double rayFovDepth = ray_fov_depth( xUnitVecMedian, yUnitVecMedian, xRayStart, yRayStart, xRayEnd, yRayEnd );
6.2		const double scale = scaleOneDist / rayFovDepth;

6.3		const int yScreenTop = screenHeight / 2.f - ( screenHeight / 2.f ) * scale;
6.4		const int yScreenBottom = screenHeight - yScreenTop;
		
6.5		*yScreenTopOut = yScreenTop;
6.5		*yScreenBottomOut = yScreenBottom;

6.6		if( hitWall ) ray_draw_wall( xScreen, yScreenTop, yScreenBottom, prcWall, texWall, screenWidth, screenHeight );
	}
}

Drawing the wall texture

Once a ray has hit a wall, the result on screen is a single vertical segment one pixel wide. What we draw in this segment then has to be a corresponding one pixel wide vertical slice from the texture associated with that wall. We then scale this slice's height up or down to fit the segment on the screen, but how do we find which one pixel wide slice of the texture to choose in the first place?

ray21
fig 21 For our single pixel wide segment we want to draw on screen we need to find a single pixel wide segment on a texture to draw from, the red line here.
Texture is made by PamNawi CC-BY 4.0

If you look at fig 18 you see that all the tile walls surrounding our player run left to right from his perspective. Now say that the blue tile to the right of the player had been solid. In that case the texture of the wall we need to render a strip from would sit right where the arrow our ray intersects is, the arrow pointing down. The leftmost point of the texture would be at the start of the arrow and the rightmost point would be at the arrow's end.
Now, technically the texture would be a wall of the blue tile and the arrow is a wall of the tile that the player is in, but this is irrelevant because they occupy the same space since the tiles are contiguous.
The ray's intersection point of 30% thus also tells us how far from the left side of the texture the vertical slice we're looking for is. This holds true for all walls, because from the player's perspective the collision point we get back in percent always describes a point measured from the wall's leftmost side.
After a collision is detected we can then use the last intersection percentage we got from the collision algorithm to calculate the X coordinate to place our strip on the texture. The calculation for it is:

( textureWidth - 1 ) * intersectionPercentage = xPositionOnTexture 

The reason for subtracting 1 from the texture's width is that pixel counting starts from 0 and not 1, so in order to stay within valid pixels we need to reduce the width by one. E.g. if the texture is 100 pixels wide, the valid horizontal pixels or X positions of the texture run from 0 to 99. Let's look at how a texture 100 pixels wide and an intersection point of 30% would look in our equation:

( 100 - 1 ) * .3 = 29.7

A value of 29.7 could be rounded up to 30, or we can just place it in an int and drop the fraction for a pixel position of 29 instead.
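In code that could look something like this (hypothetical values):

	//hypothetical example: a 100 pixel wide texture and an intersection 30% along the wall
	const int texWidth = 100;
	const double intersectionPrc = 0.3;
	const int xPosTex = ( texWidth - 1 ) * intersectionPrc; //29.7 truncated to 29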

A vertical strip only has one X position but multiple Y positions so how do we figure those out? If our texture is 100 pixels tall but our strip we need to fill in on screen is only 25 pixels tall we obviously cannot draw all 100 vertical pixels within that strip. What we can do is take 25 "steps" vertically along our texture and sample a pixel at each step and then put that on screen.

ray22
fig 22 The left image is the size of the wall as rendered on screen. The height of this strip on screen is 7 pixels. What we need to do is to sample 7 pixels vertically along a strip on the larger original texture on the right.

This will give a shrunken version of the texture. Starting from a pixel Y position of 0 on our texture we then need to step downwards a certain number of pixels for each of our 25 on screen pixels. Calculating this is simple:

textureHeight / screenStripHeight = pixelStep 

The first pixel we draw on screen will correspond to the top pixel in our texture. If the screen strip is 25 pixels tall and our texture 100 pixels tall, each step will be 100 / 25 = 4 pixels.
From this we can then calculate the Y positions on the texture that we need to sample for our 25 screen pixels:

Screen pixel 0 = 4 * 0 = Texture pixel 0
Screen pixel 1 = 4 * 1 = Texture pixel 4
Screen pixel 2 = 4 * 2 = Texture pixel 8
Screen pixel 3 = 4 * 3 = Texture pixel 12
...
Screen pixel 22 = 4 * 22 = Texture pixel 88
Screen pixel 23 = 4 * 23 = Texture pixel 92
Screen pixel 24 = 4 * 24 = Texture pixel 96
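The drawing function we'll get to below is recursive, but the same sampling can be sketched as a plain loop (a standalone sketch with made up sizes, not the actual drawing code):

	//minimal iterative sketch, assuming a 100 pixel tall texture and a 25 pixel tall screen strip
	const int texHeight = 100;
	const int stripHeight = 25;
	const double texStepPxl = ( double ) texHeight / stripHeight; //4 texture pixels per screen pixel

	for( int screenPixel = 0; screenPixel < stripHeight; screenPixel++ ) {
		const int yPosTex = texStepPxl * screenPixel; //0, 4, 8, ... 96
		//sample the texture at yPosTex and write that color to row screenPixel of the on screen strip
	}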

Let's look at the code. First we have a function for setting up the values needed before drawing the pixels:
1. Calculate how many pixels there are between the top and bottom of our vertical strip on screen. Note that we add one to the subtraction to get the pixel count; if for example our strip spans Y coordinates 10 to 20 that's 11 pixels, which the subtraction alone wouldn't give us.
2. Calculate the vertical pixels stepping.
3. Calculate the X position on the texture for our strip. Here "xTexPrc" is the percentage where the ray intersected the tile wall, which we got from the collision algorithm.
4. Start drawing the first pixel.

	void ray_draw_wall( const int xPosScreen, const int yPosScreenStart, const int yPosScreenEnd, 
			const double xTexPrc, const Image tex, const int screenWidth, const int screenHeight )
	{
1.		const int pixels = yPosScreenEnd - yPosScreenStart + 1;
2.		const double texStepPxl = ( float ) tex.height / pixels;
3.		const int xPosTex = ( tex.width - 1 ) * xTexPrc;

4.		ray_draw_wall_draw_pixel( 0, pixels, xPosScreen, yPosScreenStart, tex, texStepPxl, xPosTex, screenWidth, screenHeight ); 
	}

Here's how the pixel drawing works:
1. If our pixel number equals the total height of the strip on screen we end the drawing.
2. The pixels are drawn top to bottom, so if the on screen position equals the screen height we're drawing outside the screen. Remember that if the screen is 100 pixels tall the last drawable row is 99. All subsequent pixels would also land off screen, so we might as well stop.
3. If the pixel we're drawing is above the screen we don't need to draw it, but subsequent pixels might be visible farther down, so we skip ahead one pixel and try again.
4. Calculate a Y position on the texture that we need to read from based on the stepping.
5. Pull a color from the texture's pixel grid at the X and Y coordinates we have just calculated.
6. Put that color onto a pixel in our screen buffer.
7. Increment one pixel number and one Y position down on the screen and draw another pixel.

	void ray_draw_wall_draw_pixel( const int pixel, const int pixelLast, const int xPosScreen, const int yPosScreen, 
			const Image tex, const double texStepPxl, const int xPosTex, const int screenWidth, const int screenHeight )
	{
1.		if( pixel == pixelLast ) return;
2.		if( yPosScreen == screenHeight ) return;
3.		if( yPosScreen < 0 ) goto nextIteration;

4.		const int yPosTex = texStepPxl * pixel;
5.		const Color texColor = *( ( Color* ) tex.data + xPosTex + yPosTex * tex.width );
6.		*( ( Color* ) screenImage.data + xPosScreen + yPosScreen * screenWidth ) = texColor;

3.		nextIteration:
7.		ray_draw_wall_draw_pixel( pixel + 1, pixelLast, xPosScreen, yPosScreen + 1, tex, texStepPxl, xPosTex, screenWidth, screenHeight );
	}

Conclusion

Not quite on par with the Unity engine, but there's something satisfying about having written the whole engine yourself and knowing how all the code works.
Some things like jumping or a limited form of looking up and down can be implemented easily by adding offsets to the segments, as sketched below.
Floors and ceilings are a bit more complicated and I'll put up a separate page explaining those later. Drawing characters shouldn't be that difficult either, so I might add that too later, and some sort of scrolling background drawing would have to be implemented for more open maps.
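A rough idea of what such an offset could look like ("pitchOffset" is a hypothetical variable, not part of the code above):

	//hypothetical sketch: shift the vertical center of every wall segment by pitchOffset
	//a positive pitchOffset pushes the segments down the screen (looking up), a negative one pushes them up (looking down)
	const int yCenter = screenHeight / 2 + pitchOffset;              //normally just screenHeight / 2
	const int yScreenTop = yCenter - ( screenHeight / 2.f ) * scale;
	const int yScreenBottom = yCenter + ( screenHeight / 2.f ) * scale;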