Monday, 4 July 2011

More Freeciv AI thoughts

I'm busy killing time with a solo game, and I have what superficially looks like a very good deal:

The economics report suggests that Adam Smith's Trading Company would save me 43 gold per turn, in perpetuity. It's in perpetuity regardless of when the savings start, so since the Company would be ready in 34 turns anyway due to the city's production capacity, buying it would save me 33 turns.

43 gold over 33 turns is 1419 gold, and gold doesn't depreciate, so this certainly looks like an arbitrage opportunity: spend 680 now to save 1419. My kitty is sufficiently full (4026 gold), and it's a solo game, so I'm not too worried about opportunity costs - losing a war for want of 680 nails would be such a cost.

To figure out what interest rate discounts those 43 * 33 savings to 680 gold now, I head over to my online financial calculator, and do a manual pseudo-binary search on the interest rate. 61% per "year" just about nails it, which translates to 5.1% per turn.
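That pseudo-binary search can be sketched in Python. The function names and the closed-form annuity present-value formula are my own framing, not anything the game provides:

```python
def present_value(per_turn, turns, rate):
    """PV of a fixed saving of `per_turn` gold for `turns` turns at a per-turn discount rate."""
    return per_turn * (1 - (1 + rate) ** -turns) / rate

def implied_rate(per_turn, turns, price, lo=1e-6, hi=1.0, iters=60):
    """Binary-search the per-turn rate that discounts the savings stream to `price`."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if present_value(per_turn, turns, mid) > price:
            lo = mid  # savings still worth more than the price, so the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

rate = implied_rate(43, 33, 680)
print(f"{rate:.2%} per turn")  # about 5.1%
```

Translating that into a "yearly" rate just compounds it over however many turns one calls a year.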

Is my civilization growing, in some sense, at 5.1% per turn? The gold kitty (and the science bulbs kitty too, for that matter) doesn't attract interest the way money in the bank might. A Freeciv AI agent might not have an easy way to determine an interest rate - it would have to weigh many individual decisions every turn to get an idea. My agent-based Freeciv AI does have a notion of a "prevailing interest rate", but no mechanism to determine what prevails. My current thinking is to rank each possible decision by the discount rate required for it to be worthwhile, then do all the most worthwhile things until there's no money, no time, or no movement left. The last such action would probably be the best indicator of a single civilization-wide risk-free rate of return, but by then that number itself would no longer be interesting.
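The ranking I have in mind might look like the following sketch. The decisions and their numbers are invented for illustration; `implied_rate` is the same binary search on the discount rate described above:

```python
def implied_rate(per_turn, turns, price, lo=1e-6, hi=1.0):
    """Per-turn discount rate at which a savings stream is worth exactly `price`."""
    for _ in range(60):
        mid = (lo + hi) / 2
        pv = per_turn * (1 - (1 + mid) ** -turns) / mid
        lo, hi = (mid, hi) if pv > price else (lo, mid)
    return (lo + hi) / 2

# Hypothetical decisions: (name, price in gold, saving per turn, turns of savings).
decisions = [
    ("Trading Company", 680, 43, 33),
    ("rush granary", 120, 4, 50),
    ("rush marketplace", 300, 9, 40),
]

# Rank by required discount rate, then do the most worthwhile things until the gold runs out.
ranked = sorted(decisions, key=lambda d: implied_rate(d[2], d[3], d[1]), reverse=True)
gold, chosen = 4026, []
for name, price, *_ in ranked:
    if price <= gold:
        gold -= price
        chosen.append(name)
```

The rate implied by the last affordable action is the closest thing to a civilization-wide risk-free rate, though as noted it arrives too late to be useful.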

There are complications, and I'm not sure how to resolve them. Firstly, actions are not independent. My AI breaks tasks down into subtasks, and these subtasks can sometimes serve multiple supertasks. (This is code in progress - I have no code collateral yet.) For example, if I need to build a road to a port city, and also to colonize another continent, building one settler as a subtask serves both supertasks: build the road to port, then get on a boat. Each supertask might individually be very low down on the list of worthwhile things to do, but sharing the subtask might push them to near the top. Another concern is quite straightforwardly related to SAT: each task claims some subset of resources, making them unavailable for others.
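A toy version of the shared-subtask effect, with invented costs and payoffs: each supertask alone can't justify the settler, but since the settler is built once and serves both, the joint plan is worthwhile.

```python
# Purely illustrative numbers: two supertasks share one settler subtask.
settler_cost = 40
road_payoff = 30    # payoff of the road-to-port supertask, net of its other costs
colony_payoff = 35  # payoff of the colonization supertask, net of its other costs

# Evaluated independently, neither supertask covers the shared subtask's cost...
each_alone_worthwhile = (road_payoff > settler_cost, colony_payoff > settler_cost)

# ...but the settler is built once and serves both, so jointly the plan is profitable.
joint_worthwhile = (road_payoff + colony_payoff) > settler_cost
print(each_alone_worthwhile, joint_worthwhile)  # (False, False) True
```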

I'm unsure how to address these complications; for now my best guess forward is a Monte Carlo-style approach: randomly choose some set of tasks to attempt, sort them by risk-free rate of return, and repeat. If a task consistently shows up among the winners, it is likely to be a good idea, so commit to it. Perhaps repeat the procedure on the remaining tasks, or just be lazy and pick a few more runners-up.
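The Monte Carlo idea, sketched with made-up tasks and rates of return (real scores would come from discounting each task's payoff as described above):

```python
import random

# Hypothetical tasks mapped to an illustrative risk-free rate of return per turn.
tasks = {"road": 0.04, "settler": 0.07, "trading co": 0.051, "temple": 0.02, "boat": 0.06}

wins = {t: 0 for t in tasks}
for _ in range(1000):
    sample = random.sample(list(tasks), k=3)  # randomly choose a set of tasks to attempt
    sample.sort(key=tasks.get, reverse=True)  # sort by rate of return
    for t in sample[:2]:                      # this run's "winners"
        wins[t] += 1

# A task that consistently shows up among the winners is likely a good commitment.
best = max(wins, key=wins.get)
```

Committing to `best`, removing it from the pool, and rerunning would implement the "repeat on the remaining tasks" variant.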

1 comment:

  1. I understood most of the words in this post ... but that's about it. I can see why good AI is tough!