
Pathfinding Changes #4934

Merged
ArturKnopik merged 29 commits into otland:master from NRH-AA:Pathfinding_Modifications
Jul 1, 2025

Conversation

@NRH-AA
Contributor

@NRH-AA NRH-AA commented Jun 20, 2025

Performance enhancements to the pathfinding algorithm. Removed `new` from node construction to help with memory problems.

  1. Nodes are stored in each data structure using a hash of their x,y coordinates for quick lookup.
  2. Added a visited set so we do not repeat nodes.
  3. Moved open nodes to a priority queue that stores them based on their f value for faster f-value lookup. (No more harsh sorting)
  4. Added a sightline check to optimize performance on clear paths.
  5. Optimized the heuristic calculation, which increased performance on all paths.
  6. Fixes [Bug]: Fleeing creatures aren't properly taking distance steps #4884
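For readers skimming the diff, the combination described in points 1-3 can be sketched roughly like this. This is a standalone sketch with made-up names (Node, hashPos, cheapestF), not the PR's actual code:

```cpp
#include <cstdint>
#include <queue>
#include <unordered_set>
#include <vector>

// Minimal stand-in for an A* node.
struct Node {
    int32_t x, y;
    int32_t f; // g + heuristic
};

// Pack x,y into one 64-bit key for O(1) hash lookups.
inline uint64_t hashPos(int32_t x, int32_t y)
{
    return (static_cast<uint64_t>(static_cast<uint32_t>(x)) << 32) |
           static_cast<uint32_t>(y);
}

struct NodeCompare {
    // std::priority_queue is a max-heap, so "greater f = lower priority"
    // turns it into a min-heap on f: the cheapest node pops first, with
    // no explicit sorting pass.
    bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};

// Push a few open nodes, mark the best one visited, and return its f value.
inline int32_t cheapestF()
{
    std::priority_queue<Node, std::vector<Node>, NodeCompare> openSet;
    std::unordered_set<uint64_t> visited; // closed set: never revisit a node
    openSet.push({5, 5, 14});
    openSet.push({4, 5, 12});
    openSet.push({5, 4, 13});
    visited.insert(hashPos(openSet.top().x, openSet.top().y));
    return openSet.top().f;
}
```

The hash key is what makes "have we seen this tile" a constant-time set lookup instead of a linear scan.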

@NRH-AA NRH-AA changed the title from "Initial Commit : Possible pathfinding changes" to "Possible pathfinding changes" Jun 20, 2025
@NRH-AA NRH-AA mentioned this pull request Jun 20, 2025
3 tasks
@NRH-AA
Contributor Author

NRH-AA commented Jun 22, 2025

I think this is ready if anyone is looking. I tested it all day. I will create a way for me to test millions of paths sometime tonight so I can test for memory leaks. The only other thing I might try to do is remove x and y from the actual node to save a little more memory. I saw someone say that and it may be possible. Though it won't be super easy. We do use those values and a lot of the code is tied to it. So don't hold this back waiting for it haha.

I ran the following test.

local talkaction = TalkAction("/paths")

function talkaction.onSay(player, words, param)
	if player:getAccountType() < ACCOUNT_TYPE_GOD or player:getGroup():getId() < 7 then
		return false
	end
	
	local startPos = player:getPosition()
	
	local before = os.clock()
	
	local successful = 0
	local iterations = 10000
	
	for i = 1, iterations do
		local randomX = startPos.x + math.random(-9, 9)
		local randomY = startPos.y + math.random(-9, 9)
		
		local targetPos = Position(randomX, randomY, startPos.z)
		if (player:getPathTo(targetPos)) then
			successful = successful + 1
		end
	end
	
	local after = os.clock()
	
	print(string.format("Pathfinding took %0.8f seconds to run %d times. %d paths were successful", after - before, iterations, successful))
	return false
end

talkaction:separator(" ")
talkaction:register()

which resulted in this output. Memory did not increase, which would have indicated a memory leak:
Pathfinding took 0.790000 seconds to run 100000 times. 99427 paths were successful

My CPU usage increased about 0.5-1% to calculate the paths and memory remained where it started.

After running 1m paths through this code (all at the same time) the result is this:
Pathfinding took 7.558000 seconds to run 1000000 times. 1000000 paths were successful

The memory for the server actually decreased after the test, presumably because it freed memory on something else. I really doubt there is a memory leak at this point.

@NRH-AA
Contributor Author

NRH-AA commented Jun 22, 2025

Results of changing the heuristic a little:

Pathfinding took 6.77500000 seconds to run 1000000 times. 1000000 paths were successful
Pathfinding took 6.81500000 seconds to run 1000000 times. 1000000 paths were successful

ALSO... I did a clean install of the old TFS algorithm. As if we reverted my last pull request. Here is the result:

Pathfinding took 18.52000000 seconds to run 1000000 times. 459925 paths were successful
Pathfinding took 19.84900000 seconds to run 1000000 times. 438003 paths were successful

A pretty clear sign that this version is faster, uses less memory, and no more memory leaks! NICE!

The low success rate is due to the old version of maxSearchDist. It was only around 15 before we increased it to clientViewX+Y, and I am checking 9x9 around the player, so a lot of paths end up being too far away. That said, the old version didn't handle paths outside its range well and still tried to calculate the path, which means we did 250 iterations just to find out we couldn't make the path. Between a slower implementation and the absence of the pre-path checks that are in my new code, it takes almost 4x as long to run the paths.

As a last check on the old TFS algorithm, I reduced the path sizes to 5x5 to try to get more successful paths. It was still slower than this new algorithm at half the path size.
Pathfinding took 9.88900000 seconds to run 1000000 times. 966979 paths were successful

@Codinablack
Contributor

I am just giving an opinion here.. take it or leave it... but if you are going to continue to optimize and fine hone this pathfinding algo, might I suggest you research JPS, as it's supposed to be an enhancement to A* which supposedly dramatically increases performance... at least for sure on big grids/nodemaps... it may or may not be worth implementing for our much smaller use cases, I have no idea... another possible optimization I thought of, but never really checked to see if it could be applied, was to limit the distance the node search continues away from the creature based on max viewport.. Anyways, good luck and hope you keep killing it on these work you are doing!

@gesior
Contributor

gesior commented Jun 23, 2025

Results of changing the heuristic a little:

It looks like an optimized algorithm. I will try to benchmark it later.

The problem with pathfinding CPU usage is not players walking, but monsters walking.
You should not benchmark a player walking to random positions. Benchmark every spawned monster walking to random positions on screen (maybe better not random, but to each tile on screen, to keep tests more consistent), so there will be many positions they cannot walk to and a lot of map to load from RAM (not CPU cache).
Your benchmark reports a 100% success rate, but there are many cases when there is no way to the target on an OTS (e.g. player and monster are in separate tunnels), and these are the worst: you must calculate the path until the algorithm hits a limit, e.g. 250 iterations, to be 'sure' that there is no way.
The problem with benchmarking a 'valid path' 100% of the time is that CPU branch and cache prediction will heavily optimize that branch/cache, and it may increase speed a few (!) times, but in the real world, with multiple fails, it will work slower.

It would also be good to print the total number of 'steps to target' calculated, because it does not matter if the algorithm is faster if it calculates longer (non-optimal) paths more often.

@NRH-AA
Contributor Author

NRH-AA commented Jun 23, 2025

Problem with path finding CPU usage is not players walking, but monsters walking. You should not benchmark player walking to random position. Benchmark every monster spawned walking to random position on screen (maybe better not random, but to each tile on screen to keep tests more consistent), so there will be many positions to which they cannot walk and a lot of map to load from RAM (not CPU cache).

This is something that would make either algorithm slower. I understand the idea you are presenting, but if an algorithm runs faster, even if it is because of CPU cache, it would almost always run faster in a more "real world" environment. I will mention the original pathfinding system in TFS wasn't "slow", it just needed optimizations to how it worked. With just a few if statements in the original code it would have performed a lot better. No matter what though, it will always be slower, because it checks 7 extra nodes for each 1 node in A*, since it doesn't utilize the target position.

Your benchmark reports 100% success rate, but there are many cases, when there is no way to target on OTS (ex. player and monster are in separate tunnels) and these are the worst - you must calculate path until algorithm hits limit ex. 250 iterations, to be 'sure' that there is no way. Problem with benchmarking 'valid path' 100% of time is that CPU branch and cache predictions will heavily optimize that branch/cache and it may increase speed few (!) times, but in real world with multiple fails it will work slower.

In these tests, where I ran the exact same test on both algorithms, it is clear the new implementation is a lot faster. The high success rate is actually a testament to how good the pathfinder is. Making the two slower by having fails will not change that, but I can definitely do some tests to show that. The reason why is this: in my implementation I can figure out if a path is not possible in 60-80 iterations, which is why I have the 120 iteration limit; it is actually more than it needs to be, just because I wanted to give it some wiggle room. In the old version, 250 was the limit of closed nodes (not iterations) it could check, which limited it to somewhere between 15-20 sqms. The 120 iteration limit allows us to check paths, like in caves that do not link, up to around 30 tiles away. This would make it SEEM slower, but in reality it is checking many more nodes. It is actually a place where I could optimize the code even more. I purposefully left out this optimization because I need to see Tibia's behavior in this area. I know their monsters will only check so far for a path, but I am not sure exactly how far that is.

So, yes, in the specific case of paths not being found (they are too long to find) but the target being close enough that my code doesn't ignore the path, they will look as though they are the same speed. In reality, again, mine checked 2-3x as many nodes in the same time.

To fix this, all I would have to do is add a check on how far the node we are checking is from our startPos. If the nodes are 20 sqm away, for example, we could just terminate because the path is too long.
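That termination check could look roughly like this (a hypothetical helper, using per-axis distance in the same style as the existing maxSearchDist check; the name and signature are illustrative only):

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical early-exit check: abandon the search once the node being
// expanded is farther from startPos than the search is meant to reach,
// instead of burning the full iteration budget on an impossible path.
inline bool pathTooLong(int32_t startX, int32_t startY,
                        int32_t nodeX, int32_t nodeY, int32_t maxDist)
{
    return std::abs(nodeX - startX) > maxDist ||
           std::abs(nodeY - startY) > maxDist;
}
```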

I know the algorithm takes the appropriate (not longer) path, from testing it on monsters in real-world scenarios and just from knowing how A* finds its paths. It will never choose a path that is 14 sqm over a path that is 13 sqm.

In most cases you will have two paths that can be correct; how you have implemented the algorithm determines which one it picks, but it doesn't matter which one it takes as long as it picks one of the two.

Here is an example:
Dijkstra's
(screenshot)

A*
(screenshot)

Both paths are the same distance to cover. The ONLY argument against A* is that it looks more like a computer made the path, which it did. It doesn't look very monster-like to walk into a wall, but the performance cost of not using A* is way too much of a drawback.

It would be also good to print total number of 'steps to target' calculated, because it does not matter, if algorithm is faster, if it calculates longer (not optimal) paths more often.

Check my original pull and the pull made for the memory leak problem. I test distance on paths extensively. The path should always be the same distance as the distance between startPos and targetPos (maybe -1, because we don't walk on top of the target). Trust me, I have worked on this for years. I know what is going on here very well.

Lastly, if you want to get a test with more accurate results in all cases add this:
Replace:

if (fpp.maxSearchDist != 0 &&
    (startPos.getDistanceX(pos) > fpp.maxSearchDist || startPos.getDistanceY(pos) > fpp.maxSearchDist)) {
	continue;
}

With:

if (fpp.maxSearchDist != 0 &&
    (startPos.getDistanceX(pos) > fpp.maxSearchDist || startPos.getDistanceY(pos) > fpp.maxSearchDist)) {
	continue;
} else if ((startPos.getDistanceX(pos) + startPos.getDistanceY(pos) >
            Map::maxViewportX + Map::maxViewportY)) {
	break;
}

I added the check for paths that have maxSearchDist but not for ones that do not. We need to end those sooner too, I'm just not sure when.

I am just giving an opinion here.. take it or leave it... but if you are going to continue to optimize and fine hone this pathfinding algo, might I suggest you research JPS, as it's supposed to be an enhancement to A* which supposedly dramatically increases performance... at least for sure on big grids/nodemaps... it may or may not be worth implementing for our much smaller use cases, I have no idea... another possible optimization I thought of, but never really checked to see if it could be applied, was to limit the distance the node search continues away from the creature based on max viewport.. Anyways, good luck and hope you keep killing it on these work you are doing!

I plan on looking at some of the "better" algorithms for sure. A* is kind of the go-to for Tibia because we don't calculate the z position and the paths are pretty small. Any other algorithm is going to be slower than what you can get from a perfect A* implementation for Tibia. (Mine is not perfect; we could use a heap for openSet, and use vector indices instead of a parent node pointer in AStarNode, just as examples to make it a little better even now.) However, if we wanted to make a more verbose pathfinder so we could delete a lot of the other game logic, like checking whether we can throw an item somewhere, which uses the z position, going for one of the other algorithms might be better. Then again, maybe not, because of the small distance of paths.
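The "vector indices instead of parent node" idea mentioned above could be sketched like this (hypothetical names, not the PR's code): because the parent is stored as an index into the pool rather than a pointer, a std::vector reallocation can never leave a dangling parent link.

```cpp
#include <cstdint>
#include <vector>

// Node whose parent is an index into the pool, -1 for the start node.
struct PoolNode {
    int32_t x, y;
    int32_t parent;
};

struct NodePool {
    std::vector<PoolNode> nodes;

    int32_t create(int32_t parent, int32_t x, int32_t y)
    {
        nodes.push_back({x, y, parent}); // may reallocate; indices stay valid
        return static_cast<int32_t>(nodes.size()) - 1;
    }
};

// Walk the parent chain back to the start node and count the steps.
inline int32_t pathLength(const NodePool& pool, int32_t idx)
{
    int32_t steps = 0;
    while (pool.nodes[idx].parent != -1) {
        idx = pool.nodes[idx].parent;
        ++steps;
    }
    return steps;
}
```

This trades a pointer dereference for an index plus base lookup, which is usually a wash on speed but removes a whole class of use-after-realloc bugs.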

I will check out JPS for sure.

You are exactly right about the node thing. I didn't see that before I recommended it to gesior for his tests lol. Nice eye. I purposefully left it out for now because I don't know Tibia's actual limit, and 120 will only check around 30 sqm anyway.

@NRH-AA NRH-AA changed the title from "Possible pathfinding changes" to "Pathfinding Changes" Jun 24, 2025
@gesior
Contributor

gesior commented Jun 25, 2025

@NRH-AA
I ran the test as in the previous optimized pathfinding memory leak fix (each monster on the TFS map tries to walk to every position in a 10 SQM range: add at the start of Game::checkDecay this https://paste.ots.me/564228/text ) and with this PR's map.cpp and map.h it CRASHED the server!
Tested on Ubuntu 22.04 without vcpkg (http disabled to make it compile). gdb report:

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x0000556c1268753a in Map::getPathMatching (this=0x556c1297cfd0 <g_game+80>, creature=..., targetPos=..., dirList=..., pathCondition=..., fpp=...) at /home/jskalski/p/prv/ots/forgottenserver-pf/src/map.cpp:834
834			pos.x = found->x;
[Current thread is 1 (Thread 0x7ff7de03c640 (LWP 68084))]
(gdb) bt full
#0  0x0000556c1268753a in Map::getPathMatching (this=0x556c1297cfd0 <g_game+80>, creature=..., targetPos=..., dirList=std::vector of length 25, capacity 32 = {...}, pathCondition=..., fpp=...)
    at /home/jskalski/p/prv/ots/forgottenserver-pf/src/map.cpp:834
        dx = 18255
        dy = 0
        pos = {x = 20736, y = 0, z = 13 '\r'}
        startPos = {x = 139, y = 485, z = 13 '\r'}
        distanceX = 10
        distanceY = 6
        maxDistanceX = 22
        maxDistanceY = 22
        allNeighbors = {_M_elems = {{first = -1, second = 0}, {first = 0, second = 1}, {first = 1, second = 0}, {first = 0, second = -1}, {first = -1, second = -1}, {first = 1, second = -1}, {first = 1, second = 1}, {first = -1, 
              second = 1}}}
        sightClear = false
        endPos = {x = 128, y = 480, z = 13 '\r'}
        nodes = {nodes = std::vector of length 0, capacity 308, nodeMap = std::unordered_map with 0 elements, visited = std::unordered_set with 0 elements, openSet = std::priority_queue wrapping: std::vector of length 0, capacity 0}
        found = 0x7ff7d81c949000
        bestMatch = 0
        iterations = 120 'x'
        n = 0x7ff7d865ba40
        prevx = 20736
        prevy = 0
#1  0x0000556c1279ec4b in Creature::getPathTo (this=0x7ff7d93f4f10, targetPos=..., dirList=std::vector of length 25, capacity 32 = {...}, fpp=...) at /home/jskalski/p/prv/ots/forgottenserver-pf/src/creature.cpp:1498
No locals.

EDIT:
I ran all monsters with the default Creature parameters passed to fpp (min/max distance set to 1 = go to the target position):

	FindPathParams fpp;
	fpp.fullPathSearch = true;
	fpp.clearSight = true;
	fpp.maxSearchDist = Map::maxViewportX + Map::maxViewportY;
	fpp.minTargetDist = 1;
	fpp.maxTargetDist = 1;

Before merging this PR, someone should also test how monsters that keep distance (e.g. Warlock) work after these changes.

@NRH-AA
Contributor Author

NRH-AA commented Jun 25, 2025

pos = {x = 20736, y = 0, z = 13 '\r'}
dx = 18255
prevx = 20736

startPos = {x = 139, y = 485, z = 13 '\r'}
endPos = {x = 128, y = 480, z = 13 '\r'}

What is this?

It looks like you made a creature that is 20k tiles away from the path it generated.

@NRH-AA
Contributor Author

NRH-AA commented Jun 28, 2025

Alright, sounds like it is good enough then. If it stops the crash then let’s roll with it.

@gesior
Contributor

gesior commented Jun 29, 2025

It should not affect performance much or at all.

It affects performance. It's 25% slower with 'new', but IDK how to make it work without 'new' and still compile with AddressSanitizer on Ubuntu 22.04.
The version with 'new' is still almost 2 times faster than the old algorithm, so we should merge this PR as a crash/memory-leak fix, and we can work on removing 'new' and other optimizations later.

@NRH-AA
Contributor Author

NRH-AA commented Jun 29, 2025

It should not affect performance much or at all.

It affects performance. It's 25% slower with 'new', but IDK how to make it work without 'new' and compile with address sanitizer/on Ubuntu 22.04. Version with 'new' is still almost 2 times faster than old algorithm, so we should merge this PR as crash/memory leak fix and we can work on removing 'new' and other optimizations later.

I think we can add a null check inside NodeCompare to fix the crash, as an alternative to your change.

If you have time could you try this? In the meantime we can push this.

It seems the issue is that we are removing nodes while they are being compared, resulting in a null reference.

struct NodeCompare
{
	bool operator()(AStarNode* a, AStarNode* b) const
	{
		if (!a) return false;
		if (!b) return true;
		return a->f > b->f; // Min-heap based on f score
	}
};

@NRH-AA NRH-AA requested a review from ArturKnopik June 29, 2025 19:09
@NRH-AA
Contributor Author

NRH-AA commented Jun 30, 2025

I had it calling nodes.clear() too early, which was probably causing the invalid memory access.

@NRH-AA
Contributor Author

NRH-AA commented Jun 30, 2025

@gesior We still need to increase the allocated size; * 7 wasn't enough. I picked that number "randomly". I figured it was close to enough to handle any paths we were trying to get. That is actually the only problem that was happening.

@gesior
Contributor

gesior commented Jun 30, 2025

@Shawak
Yes. It's a problem with the vector reallocating after reaching its reserved size.

What I did to find the real problem is described at the end of this comment.

I tested whether increasing nodeReserveSize by 10 times fixes the problem. It worked, but it's not a real solution.
The real solution would be to stop adding new nodes after reaching nodes.size(), and that's what I did.
The only problem is that we set the limit of nodes to a constant value. The most complicated paths can't be calculated, because the algorithm reaches the node limit and cannot increase it.
In tests, the code with new found 138,833 paths; the code with heap allocation and a node limit found 138,238 paths (0.5% less), but the code with 'new' is 24% slower, so I don't think that extra 0.5% of found paths is worth +25% CPU usage.

We still need to increase the allocated size * 7 wasn't enough

7 is a pretty good number.
Number of paths found and time to calculate all paths in my TFS map benchmark, with distance up to 12 SQM x/y (7 as the base for the +/- %):

  • 5: 164872 paths (-3.5%), 4895 ms (-16% CPU)
  • 6: 168892 paths (-1.2%), 5308 ms (-9% CPU)
  • 7: 170923 paths, 5833 ms
  • 12: 173231 paths (+1.3%), 7556 ms (+29% CPU)
  • 50: 173687 paths (+1.6%), 8947 ms (+53% CPU)
  • 500: 173687 paths (+1.6%), 10372 ms (+78% CPU)

Before this PR (the optimized version with the memory leak):

  • 150,617 paths (-11.9%), 13058 ms (+123% CPU)

Before optimized path finding:

  • 169935 paths (-0.5%), 13322 ms (+228% CPU)

So with * 7 and this PR's code, we get 0.5% more paths found than before the 'optimized path finding' PR, and it uses 56% less CPU.
Nice optimization @NRH-AA !


What I did to find the real problem:

I wrote a simplified version of this problem:
https://paste.ots.me/564233/text
The most important part is:

int main(int argc, const char** argv)
{
	AStarNodes nodes(1, 1);
	AStarNode* node1 = &nodes.nodes.back();
	std::cout << "n1:" << node1->x << "," << node1->y << std::endl;
	AStarNode* node2 = nodes.createNode(node1, 1, 2);
	std::cout << "n1:" << node1->x << "," << node1->y << std::endl;
	return 0;
}

You can run it with g++ -fsanitize=address test.cpp && ./a.out or here:
https://cpp.sh/
The result is (node1's values are modified by emplacing a second element into nodes, so the old address really is freed):

n1:1,1
n1:511,0

but if you uncomment:

//	nodes.reserve(2);

it will return a valid result:

n1:1,1
n1:1,1

but if you add a 3rd element in main, it will return invalid results again. The problem appears only when you add more elements to nodes than were reserved.

Code that returns invalid results is always detected with -fsanitize=address.

@NRH-AA
Contributor Author

NRH-AA commented Jun 30, 2025

@Shawak Yes. It's problem with reallocating vector after reaching reserved size.

What I did to find out real problem is described at end of this comment.

I tested, if increasing nodeReserveSize size by 10 times fixes problem. It worked, but it's not a real solution. Real solution would be to stop adding new nodes after reaching nodes.size() and that's what I did. Only problem is that we set limit of nodes to constant value. Most complicated paths can't be calculated, because algorithm reaches limit of nodes and cannot increase it. In tests code with new found 138.833 paths, code with heap allocation and limit of nodes found 138.238 paths (0.5% less), but code with 'new' is 24% slower, so I don't think that extra 0.5% found paths is worth +25% CPU usage.

We still need to increase the allocated size * 7 wasn't enough

7 is pretty good number. Number of paths found and time to calculate all paths in my TFS map benchmark with distance up to 12 SQM x/y (7 as base for +/- %):

    • 5: 164872 paths (-3.5%), 4895 ms (-16% CPU)
    • 6: 168892 paths (-1.2%), 5308 ms (-9% CPU)
    • 7: 170923 paths, 5833 ms
    • 12: 173231 paths (+1.3%), 7556 ms (+29% CPU)
    • 50: 173687 paths (+1.6%), 8947 ms (+53% CPU)
    • 500: 173687 paths (+1.6%), 10372 ms (+78% CPU)

Before this PR (optimized with memory leak):

  • 150.617 paths (-11.9%), 13058 ms (+123% CPU)

Before optimized path finding:

  • 169935 paths (-0.5%), 13322 ms (+228% CPU)

So with * 7 and this PR code, we get 0.5% more paths found than before 'optimized path finding' and it uses 56 % less CPU. Nice optimization @NRH-AA !

What I did to find out real problem:

I wrote simplified version of this problem: https://paste.ots.me/564233/text Most important part is:

int main(int argc, const char** argv)
{
	AStarNodes nodes(1, 1);
	AStarNode* node1 = &nodes.nodes.back();
	std::cout << "n1:" << node1->x << "," << node1->y << std::endl;
	AStarNode* node2 = nodes.createNode(node1, 1, 2);
	std::cout << "n1:" << node1->x << "," << node1->y << std::endl;
	return 0;
}

You can run it with g++ -fsanitize=address test.cpp && ./a.out or here: https://cpp.sh/ Result is (node1 values are modified by emplacing second element to nodes, so it really frees that address):

n1:1,1
n1:511,0

but if you uncomment:

//	nodes.reserve(2);

it will return a valid result:

n1:1,1
n1:1,1

but if you add 3rd element in main, it will return invalid results again. Problem appears only when you add more elements to nodes than is reserved.

Code that returns invalid results is always detected with -fsanitize=address.

There are 2 things that need to be fine-tuned.

(screenshot of test output)

Nodes: 227 Iterations: 161
Nodes: 191 Iterations: 141
Nodes: 183 Iterations: 135

  1. The allocated size has to be correct for how many iterations are allowed in the algorithm.
if (iterations >= Map::nodeReserveSize) {
	return false;
}

I had this, which is part of why it broke. I wanted to make it easy to increase the size of paths that can be drawn by just changing:

static constexpr int32_t maxViewportX = 11; // min value: maxClientViewportX + 1
static constexpr int32_t maxViewportY = 11; // min value: maxClientViewportY + 1

The issue is that the iteration count can't be the same as the reserve size, otherwise it will always overflow. So it should be changed to:

if (iterations >= (Map::maxViewportX + Map::maxViewportY) * 7) { // 7 so it can check all directions for a path
	return false;
}
  2. The allocated size has to handle that many nodes. The worst-case scenario is iterations * 7 (if we have to check every node on the worst possible path), but the search should end well before that thanks to the distance check from the node to the end position that we added. So to be safe we should do:
    (Map::maxViewportX + Map::maxViewportY) * 10

Yes, your solution will definitely stop any crashes, which is perfect, but this is where I left off before I took my vacation. I didn't think about the fact that allowing the iteration count to equal the reserve size would cause this problem. It still needs to be addressed, as it is unstable. The iterations should be used as a way to stop paths that are too large.
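The sizing relationship being described might be summarized like this. The constants mirror the maxViewportX/Y values quoted above; the * 7 and * 10 factors are the ones discussed in this thread, and the static_assert encodes the rule that the reserve must exceed the iteration cap (this is a sketch of the constraint, not the PR's actual code):

```cpp
#include <cstdint>

constexpr int32_t maxViewportX = 11; // min value: maxClientViewportX + 1
constexpr int32_t maxViewportY = 11; // min value: maxClientViewportY + 1

// Iteration cap: enough to check the 7 non-parent directions along the
// longest on-screen path.
constexpr int32_t maxIterations = (maxViewportX + maxViewportY) * 7;

// Node reserve: must stay above the iteration cap so the algorithm stops
// iterating before the pool can overflow (the * 10 is the safety margin
// suggested above, not a strict worst-case bound).
constexpr int32_t nodeReserveSize = (maxViewportX + maxViewportY) * 10;

static_assert(nodeReserveSize > maxIterations,
              "node reserve must exceed the iteration cap");
```

With both values derived from the same viewport constants, enlarging the viewport keeps the pair consistent automatically.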

@gesior
Contributor

gesior commented Jun 30, 2025

@NRH-AA
I've done my fix. Now it does not crash and works super fast.
You can try to fine-tune it, but the results with 7 are already better than before the pathfinding optimizations (+0.5% more paths found).
Maybe make 2 variables in map.h: one to limit iterations, a second to limit the node reserve.
Also remember to change uint8_t iterations = 0; to uint32_t, if you want to test it with higher number of iterations.

IDK what is wrong in that screenshot. The algorithm is not supposed to find a way out of any size of labyrinth. It must limit the CPU usage of a single monster. Also, walking too far could make a monster walk off the player's screen (no players on screen) and go idle.

The old algorithm (TFS 0.x) had the number of iterations (in our case, also the nodes reserved) set by a variable passed to getPathMatching, so it was able to run at different complexity in different parts of the engine. E.g.:

  • when a player uses a distance item (the server must walk the player to that item's position): run 500 iterations (we want good results)
  • when a monster runs to a player: run 100 iterations (save CPU)
    • if all 'run to player' attempts of a given monster failed in the last second, run 500 iterations (the monster is in some labyrinth; find a way to the player using more CPU)

That way normal monster pathfinding uses less CPU, but if it fails to find a path, it can run a more complicated search.

@gesior
Contributor

gesior commented Jun 30, 2025

@NRH-AA
With the commit "Adjust iterations and reserve size", the code is 8% faster than with the previous "* 7" commit. It failed on 2 extra path calculations out of 404k paths (170,923 paths found before the commit, 170,921 after), but it still returns paths in 0.5% more cases than the old algorithm (169,935).

Right now the optimized algorithm uses 60% (!) less CPU than the old algorithm (5334 ms vs 13322 ms).
There are no memory leaks, no crashes, and no compilation problems.

Can we merge this PR? @ranisalt @ArturKnopik

@ArturKnopik
Contributor

ArturKnopik commented Jun 30, 2025

Tomorrow I will review these changes and test them; if everything is OK I will merge.

@ArturKnopik
Contributor

ArturKnopik commented Jul 1, 2025

Tested for a while and it looks like it works; the code also looks OK. In case of any problem we can revert these changes.

@ArturKnopik ArturKnopik merged commit d9a4e0f into otland:master Jul 1, 2025
16 checks passed
@Codinablack
Contributor

@NRH-AA For the record, JPS is not a different algorithm than A*; it's an enhancement for A*, one that keeps it from iterating over already-known bad paths/nodes... which, if I read this PR correctly (the comments, not the code), you already figured out: such an alteration would indeed be a good optimization for this algorithm.

@NRH-AA
Contributor Author

NRH-AA commented Jul 1, 2025

@NRH-AA For the record JPS is not a different algo than A*, it's an enhancement for A*, one that keeps it from iterating over already known bad paths/nodes... which, if I read this PR correctly (the comments not the code), you already figured it out that such an alteration would indeed be a good optimization for this algo.

I made sure to ignore nodes we don't need to check again, but JPS would work a little differently. I think JPS works best if you already have the path data (like Google Maps). We know all routes that can be taken, which can be used to ignore routes we know will end up in a bad place. We can also jump to certain routes if we know that a node at some x,y, found for a path going north or whatever, will always come out at a specific spot. So maybe 100 paths all use that same route. That would be a hot spot, and we could automatically grab the required nodes, jump to that point, and start the pathfinding again. Doing this multiple times would for sure increase performance in that case, but without knowing the paths beforehand I don't think it's plausible.

@gesior
Contributor

gesior commented Jul 1, 2025

I made sure to ignore nodes we don't need to check again but JPS would work a little differently

I did not analyse JPS, but Wikipedia says In computer science, jump point search (JPS) is an optimization to the A* search algorithm for uniform-cost grids. It looks like it tries to 'walk' X tiles in straight lines to reduce CPU usage, but it won't work in TFS.
JPS works only for uniform-cost grids, and TFS has a different cost for each step, as there are:

  • tile ground speed (ground item)
  • direction (diagonal is more expensive)
  • magic fields (fire/energy/poison fields etc.) for monsters that are not attacked
  • creature removal (ex. a Demon destroys his own Fire Elemental summon)

so JPS won't work.
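To illustrate why the uniform-cost assumption fails, here is a toy step-cost function. The constants and names are made up for illustration; the real TFS cost logic lives in the engine:

```cpp
#include <cstdint>

// Toy model of a non-uniform step cost: the cost of entering a tile depends
// on ground speed, move direction, and hazards, so no two steps are
// guaranteed to cost the same (which is exactly what JPS assumes).
inline int32_t stepCost(int32_t groundSpeed, bool diagonal, bool magicField)
{
    int32_t cost = groundSpeed;   // slower ground => higher cost
    if (diagonal) {
        cost *= 3;                // diagonal moves are penalized
    }
    if (magicField) {
        cost += 1000;             // avoid harmful fields unless necessary
    }
    return cost;
}
```

Because JPS prunes nodes by assuming every straight-line step costs the same, any of these modifiers breaks its pruning guarantees.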


Development

Successfully merging this pull request may close these issues.

[Bug]: Fleeing creatures aren't properly taking distance steps

6 participants