I made a decision. It wasn't a very important decision, it was in a game, and I made it somewhat thoughtlessly, but I want to analyze it in order to understand how to make decisions in general.
I had a pile of recyclables - note that there's abstraction here, each item in the pile had a lot of details to it, what kind of thing it was, how damaged it was, what kind of mats I could get out of it if I chose to recycle it rather than repair it. I could have recycled everything immediately and I would have gotten some mats. The mats don't have as much detail as the recyclables, they're more fungible, but I'm still abstracting a bit to characterize them as 'a pile of mats'. Instead I decided to schlep the recyclables over to a different station, which has a bonus for recycling efficiency, and then schlep the resulting mats back to the original station.
In doing that, I'm using a fixed asset (the robot that I was schlepping the stuff inside) and I was spending some of my own time. Was the decision worth it?
I think it's useful to analogize double-entry bookkeeping (which is not an enormously awesome idea, just a moderately useful and moderately confusing one) to event sourcing. Event sourcing emphasizes that an abstract data type can be isomorphic to a free data type modulo an equivalence relation. The free data type is something like a log of every command. Then the equivalence relation indicates which sequences of commands end up at equivalent points - such as pushing an element followed by popping that element, which might be equivalent to doing nothing at all.
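To make the free-data-type picture concrete, here's a minimal Python sketch (the names and representation are my own invention): a stack's command log as the free structure, and a normalization function implementing the equivalence relation that cancels a push immediately followed by a pop.

```python
# Hypothetical sketch: a stack as a free log of commands, modulo the
# equivalence relation "push x ; pop == no-op".

def normalize(log):
    """Canonicalize a command log: a push immediately followed by a pop
    cancels, as if neither command had happened."""
    out = []
    for cmd in log:
        if out and out[-1][0] == "push" and cmd == ("pop",):
            out.pop()  # push ; pop  ==  doing nothing at all
        else:
            out.append(cmd)
    return out

log = [("push", 1), ("push", 2), ("pop",), ("push", 3)]
print(normalize(log))  # the push-2/pop pair cancels: [("push", 1), ("push", 3)]
```

Two logs are "the same state" exactly when they normalize to the same canonical log; the abstract stack is the quotient of all logs by that relation.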
Event sourcing allows you to change what the rules are, and figure out where you would be if you had been using those rules all along. The system of one 'everything' log, which has a (perhaps wordy) description of what each transaction really is and calls out all the accounts touched, plus account-specific logs that point (perhaps with a transaction number) back to the 'everything' log, is effectively maintaining indexes.
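A sketch of what that replaying looks like, with invented events and an invented rule parameter (mats yielded per recyclable), plus the per-account index back into the 'everything' log:

```python
# Hypothetical sketch: one 'everything' log, replayed under two rule sets,
# with an account-specific index pointing back by transaction id.

events = [
    {"id": 0, "desc": "recycle 10 items", "cmd": ("recycle", 10)},
    {"id": 1, "desc": "recycle 5 items",  "cmd": ("recycle", 5)},
]

def replay(events, mats_per_item):
    """Where would the mats account be if this rule had applied all along?"""
    mats = 0
    for e in events:
        op, n = e["cmd"]
        if op == "recycle":
            mats += n * mats_per_item
    return mats

print(replay(events, 2))  # old rules: 30
print(replay(events, 3))  # with the bonus applied all along: 45

# The account-specific log as an index: ids of transactions touching mats.
mats_index = [e["id"] for e in events]
```

Changing the rules never touches the log itself; you just replay it through a different interpreter.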
Then there's a sort of squashing step that compresses each account-specific log into a concise status of that account. Then there's a utility function which takes the statuses of all the accounts and gives a utility, a number. Usually you can partially differentiate the utility function with respect to each account and get a 'price'. If you multiply the prices by the statuses then you get a number which could be called the 'value stored in that account', but it doesn't necessarily have anything to do with either the amount you spent to get that status (that's the sunk cost fallacy) or the amount you could get on the market by liquidating the underlying goods representing that account. If we believe that the account measures something of purely instrumental value, of no intrinsic worth, simply useful for achieving other things that we actually value, then it may have something to do with the future utility flows associated with that account.
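Sketched in Python with an invented utility function (diminishing returns on mats, linear cost of time spent), the prices fall out as numerical partial derivatives:

```python
# Hypothetical sketch: statuses per account, a made-up utility function,
# and 'prices' as numerical partial derivatives of utility.
import math

def utility(status):
    # Invented for illustration: log-utility over mats, linear time cost.
    return 10 * math.log(1 + status["mats"]) - 2 * status["time_spent"]

def price(status, account, eps=1e-6):
    """The 'price' of an account: partial derivative of utility with
    respect to that account's status, taken numerically."""
    bumped = dict(status)
    bumped[account] += eps
    return (utility(bumped) - utility(status)) / eps

status = {"mats": 40, "time_spent": 3}
for acct in status:
    p = price(status, acct)
    # price * status is the 'value stored in that account' -- distinct
    # from both the sunk cost and the market liquidation value.
    print(acct, round(p, 4), round(p * status[acct], 2))
```

With these made-up numbers the price of a mat is about 10/41, and the price of a unit of time is simply -2, since the invented utility is linear in time.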
So in this decision, there might be six accounts:
- recyclables in location 1
- mats in location 1
- recyclables in location 2
- mats in location 2
- robot usage
- time spent
There might be transactions like:
- take recyclables in robot to location 2 (decrease recyclables in location 1, increase recyclables in location 2, record use of robot, record time spent)
- recycle (decrease recyclables in location 2, increase mats in location 2)
- take mats in robot to location 1 (decrease mats in location 2, increase mats in location 1, record use of robot, record time spent)
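Stacked up as double-entry-style postings (all quantities invented for illustration; `recyclables@1` and friends are shorthand for the accounts above):

```python
# Hypothetical sketch: the three transactions as postings against accounts,
# then the 'squashing' step that compresses each account into a balance.
from collections import defaultdict

log = [
    # (description, {account: delta, ...})
    ("haul recyclables to loc 2", {"recyclables@1": -10, "recyclables@2": +10,
                                   "robot_trips": 1, "time_spent": 1}),
    ("recycle at bonus station",  {"recyclables@2": -10, "mats@2": +25}),
    ("haul mats back to loc 1",   {"mats@2": -25, "mats@1": +25,
                                   "robot_trips": 1, "time_spent": 1}),
]

balances = defaultdict(int)
for desc, postings in log:
    for acct, delta in postings.items():
        balances[acct] += delta

print(dict(balances))
# recyclables@2 and mats@2 cancel to zero; what's left after stacking is
# the spent recyclables, the mats at location 1, and the robot/time costs.
```

The intermediate accounts cancel when the transactions are stacked, which is exactly the "cancelling where possible" move below.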
And the alternative sequence of transactions is just the single transaction of recycling the stuff in location 1 directly. If we stack those three transactions together, cancelling where possible, we can either derive something about the utility function (in order for it to ratify the choice) or we can praise or blame the choice (assuming the utility function is constant).
In the first case, the value in utility of the excess mats from the more-efficient recycling would have to be equal to or greater than the value of the time spent. If the utility function is constant, then we might be able to say that the choice was a loss (for example, spending too much time for too little a return). If the utility function is constant and the choice was a good one, then we could allocate the excess utility to the robot's account. By logging the excess utility enabled by the robot, assuming that the future will be similar to the past, and discounting the future flows of utility, we might generate a net-present-value figure for the robot that would enable us to evaluate deals that include losing the robot.
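As a worked sketch with invented numbers throughout (bonus yield, per-mat price, time cost, discount factor, usage rate), that net-present-value calculation might look like:

```python
# Hypothetical arithmetic: attribute the per-trip surplus to the robot,
# then discount an assumed-steady stream of future surpluses.

mats_bonus, mats_direct = 25, 20   # invented yields: bonus station vs direct
price_per_mat = 1.0                # assumed marginal utility of one mat
time_cost = 2.0                    # assumed utility cost of the schlepping

# Excess utility per trip, credited to the robot's account.
surplus_per_trip = (mats_bonus - mats_direct) * price_per_mat - time_cost

discount = 0.9        # assumed per-period discount factor
trips_per_period = 4  # assumed usage rate, "future similar to the past"

# NPV of a perpetual stream starting next period: flow * d / (1 - d).
npv_robot = surplus_per_trip * trips_per_period * discount / (1 - discount)
print(npv_robot)
```

A deal that costs us the robot is then worth taking only if it pays more than this figure, under all of those assumptions.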