
the relation between mathematics and mathematical logic

Okay, so mathematicians are awesome, and I wish I was one. Also, they do lots of different things - some of them work with numbers, others with linear transformations, others with partial orders, manifolds, permutations, braids, categories, sheaves, all kinds of things. One of the side fields (kind of a backwater, really) is mathematical logic, also (pompously) called "foundations of mathematics". This makes it sound important, but think of it like the relationship between biology and physics. If a biologist makes a discovery about the embryonic development of a nematode, then that's great. It has an explanation in terms of physics, to be sure, but the biologist can work and make progress quite independently of the physicists. The biologist's discovery must be compatible with the physicists' theories - but it's not the biologist's job to make it compatible. If it's anyone's job, it's the physicists' job - their theories had better be compatible with the observed biology.

Inside of mathematical logic, there are philosophical positions, and people like me, who follow these things, get peculiarly emotionally attached to various positions. At the moment, a moderately high-status person-on-the-internet is claiming that second-order logic is NECESSARY for us to talk about the integers. Clearly, I am convinced that they are wrong-on-the-internet. In working out my (irrational, ridiculous) emotions, I am writing a blog post.


learning to program

One of the first things that you need to do if you want to learn to program is to look at the connections between syntax and trees and nested parentheses and semantics. We speak to one another (and write to one another) so fluently that it's not obvious that natural language is in some important sense tree-shaped. Someone who has never studied linguistics might be distracted by the (less important) sense in which it is line-shaped - in time (if we are speaking or gesturing) or possibly space (if we are writing).

The tree diagram is nice to have in mind, but it is not terse - it takes a huge amount of visual space. Rooted, ordered trees are isomorphic to sequences of balanced parentheses - ((()())(()())) is like a tree with seven nodes. If we have words labeling each node of the tree, then on the balanced parentheses side we get something like this: top(left(ll(), lr()), right(rl(), rr())).

It is nice to be able to walk back and forth. One direction is from ordinary language to trees, and from trees to balanced parentheses representing trees, in order to generate (a starting point for) code. The other direction is from at-first-inscrutable formal text to trees, and from trees to some natural-language pronunciation of the formal text, in order to read and have at least a stab at understanding code that someone else wrote.
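Here is a minimal sketch of walking both directions; the nested-tuple representation and the little parser are mine, just for illustration:

```python
# A minimal sketch: trees as (label, children) tuples, converted to and from
# the labeled balanced-parentheses text.
def to_text(node):
    label, children = node
    return label + "(" + ", ".join(to_text(c) for c in children) + ")"

def from_text(s, i=0):
    j = s.index("(", i)                  # the label runs up to the first '('
    label, children, i = s[i:j], [], j + 1
    if s[i] != ")":
        while True:
            child, i = from_text(s, i)
            children.append(child)
            if s[i] == ")":
                break
            i += 2                       # skip the ", " between siblings
    return (label, children), i + 1

tree = ("top", [("left", [("ll", []), ("lr", [])]),
                ("right", [("rl", []), ("rr", [])])])
text = to_text(tree)                     # 'top(left(ll(), lr()), right(rl(), rr()))'
assert from_text(text)[0] == tree
```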

After that basic step, there are a lot of "next steps". One important next step is learning to learn - how to search for information in an archive, how to learn how something works by experimenting with it, how to feel more subtle emotions about your own ignorance than simple despair or simple overconfidence. Another important next step is learning about how the world is put together - How did this text get onto this screen? How are parsers typically constructed? How does the internet work? What is a von Neumann computer? A third important next step is learning abstraction techniques. The first abstraction is procedures - I think staying away from objects and classes is a good idea for a while (and there are plenty of other abstractions beyond those). However, it might be that all of those can be learned "accidentally" while trying to complete various programming tasks.

It's sad that there are not more diverse, fun learn-to-program domains. There is drawing, with subvariants such as turtle graphics and vector graphics in the style of PostScript or Processing. There is the console interface. There are innumerable kinds of tank battles. There is Core Wars. I don't know of any fun database-backed-business domains, even though they would be really educational. There also don't seem to be many bioinformatics or nano/biotech-oriented games - there's FoldIt and Organic Builder and SpaceChem.

interfaces in game design

In some games, prominently collectible card games like M:tG,
and deck-building card games like Dominion,
but also fragments of older games such as Monopoly's Chance and Community Chest,
the player needs to read and interpret (obey)
sentences and paragraphs printed on the cards as rules.

This is in contrast to games such as Chess,
which may have rules, but during play,
the players do not generally read and interpret the rules.

In a recent RPG such as WoW or Torchlight,
these blocks of text are printed on the player's spells and abilities,
as well as their equipment.
A significant fraction of the RPG experience is 'character building',
where the player considers these rules, and particularly their synergies,
when they are choosing among several spells or abilities to invest in,
and when choosing among several pieces of equipment.
This aspect is closely analogous to deck-building in a collectible card game.

In modeling a game with this structure,
you may want to abstract away from the actual paragraphs printed
(in a CCG like M:tG, the owner/operator of the game will want to
continuously print new paragraphs),
and focus instead on the interface
between the paragraphs.

It can be difficult to figure out what the interface is.
One test for whether something is part of the interface
is whether it is universal across all decks or classes or races.
In order for things to 'plug together', they need to be universal.

A sword says it has "+1 str".
Is it the case that every character has a 'str'?
Then str is part of the interface.

A sword says it does '+10 fire damage'.
Is it the case that every blow has damage?
Is it the case that every bit of damage can be categorized as one of 'physical, fire, lightning, poison'?
Then that's part of the interface.

An ability says it does '+10 fire damage to water-type monsters'.
Is it the case that every monster can be categorized as one of Foo, Bar, Baz types? Then that's part of the interface.
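Here's a minimal sketch of what such an interface might look like in code; the enum values, numbers, and names are invented for illustration, not taken from any real game:

```python
# A minimal sketch of an 'interface' made of universals that card text plugs into.
from enum import Enum

class DamageType(Enum):      # every bit of damage has exactly one of these types
    PHYSICAL = 1
    FIRE = 2
    LIGHTNING = 3
    POISON = 4

class MonsterType(Enum):     # every monster has exactly one of these types
    FOO = 1
    BAR = 2
    WATER = 3

def base_blow(character_str):
    # universals: every character has a 'str', every blow has damage by type
    return {DamageType.PHYSICAL: character_str}

def fire_sword(blow):
    # "+10 fire damage" only needs the universal 'damage by type' slot
    blow[DamageType.FIRE] = blow.get(DamageType.FIRE, 0) + 10
    return blow

def tide_bane(blow, target_type):
    # "+10 fire damage to water-type monsters" also needs the universal monster type
    if target_type is MonsterType.WATER:
        blow[DamageType.FIRE] = blow.get(DamageType.FIRE, 0) + 10
    return blow
```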

In a collectible card game, there is often a standard layout
for every card, or a few different standard layouts for different categories.
There might even be an explanation of the standard layout
(this is where the cost is printed, this is where the offensive stat is
printed, this is where the health stat is printed, et cetera).
That standard layout is filled with universals and is therefore probably
valuable for the interface.

Monopoly Chance and Community Chest interface, with the universals:
Pay $ / Collect $ / Pay Each Player $ / Collect $ from each player (Every player has a stock of cash)
Advance to Nearest (Every player has a token at some specific place)
"Advance to nearest railroad and pay owner
twice the rental to which they are otherwise entitled;
if unowned you may buy it from the bank." (Every property is either unowned,
or owned by some specific player. Every owned property has a rental cost.)

incoherent gesturing regarding utility functions and accounting

Okay, let's assume that we understand the difference between an inferred utility function (e.g. such and such robot in fact destroys blue things therefore its utility function has a negative coefficient for 'blue things in the world'), and a utility function component in a system design (e.g. here is where the utility function is stored). We might call the latter the "utility function design pattern", and we might find examples of it in control systems where the target state, and the distance from the current state to the target state, are both actual boxes on the blueprint.

If the system that you're building is moderately large, then you might, as a first order approximation, express the utility of the current state as an additively separable function of the states of the subcomponents.

Utility is funny stuff, and these numbers attached to subcomponents are extra funny because they only add up to an approximation. One funny thing about utility is that, unlike charge, it doesn't have natural units. If you multiply your utility function by a positive constant, the new function will guide you to the same decisions. Accountants use units of cash, but the numbers don't really mean what you might think - if the 'raw materials' account has $1m in it, they don't really mean that you could acquire equivalent raw materials for $1m, nor do they mean that you could dispose of those raw materials in a hypothetical bankruptcy auction for $1m. There is a "net present value of future cash flows" interpretation of that number that is sort of justified, and I'll explain that in a bit, but for now I think it's easier to think of it as a convention: most businesses have a subcomponent of their operation which is "cash on hand", utility functions have a loose degree of freedom (the overall scale), and it is conventional to pin that degree of freedom down so that the coefficient of "utils per dollar" is 1.
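A minimal sketch of that approximation and that convention; the account names and coefficients below are invented:

```python
# A toy additively separable utility function, with the scale pinned by the
# convention that one dollar of cash on hand is worth one util.
PRICES = {"cash": 1.0, "raw_materials": 0.8, "reputation": 50.0}  # utils per unit

def utility(accounts):
    return sum(PRICES[name] * amount for name, amount in accounts.items())

# Rescaling by a positive constant changes the numbers but never the decisions.
scaled = lambda accounts: 3.0 * utility(accounts)
a = {"cash": 10, "raw_materials": 0, "reputation": 0}
b = {"cash": 0, "raw_materials": 10, "reputation": 0}
assert (utility(a) > utility(b)) == (scaled(a) > scaled(b))
```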

In double-entry accounting, there's a rule regarding conservation of utility. If an entry is purely internal to the entity doing the accounting - no interaction with the market - then utility is supposed to be conserved. This actually makes sense from a reinforcement-learning perspective. The terminal value for firms is cash. If we assume the Bellman equation for backpropagating reward is satisfied, then a purely internal transaction just moves value / utility / reward around.

The Bellman equation explains the "net present value of future cash flows" interpretation that I mentioned before - since cash flow corresponds to reinforcement-learning reward for firms, if everything is working right, and we're doing discounting rather than episodic learning, then we can relate the utility of the present state, through perhaps many iterations of Bellman backprop, to a sequence of future cash flows. However, because of the distance of the inference, and because the accounts are only a first-order approximation, it's not always a great idea to connect the number in an account to any particular anticipated sequence of cash flows.
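A minimal sketch of that connection; the discount factor and the cash flows are invented:

```python
# With discount factor gamma, the Bellman relation V(s_t) = r_t + gamma * V(s_{t+1})
# unrolls into a net present value of future rewards; for a firm, reward ~ cash flow.
def npv(cash_flows, gamma=0.95):
    return sum((gamma ** t) * c for t, c in enumerate(cash_flows))

flows = [100.0, 100.0, 100.0]
# one step of Bellman backup gives the same number as the full discounted sum
assert abs(npv(flows) - (flows[0] + 0.95 * npv(flows[1:]))) < 1e-9
```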

If you wanted to generalize double-entry accounting to deal with solitaire scenarios (such as Minecraft), then you would need to identify what your terminal values for the scenario actually are. If you only value gold, then all of your accounts, your tools and weapons, your fortifications and fixed assets, can be denominated in gold. If you actually intrinsically value lots of things - how many verbs you have available (freedom), how many different models you see (art content), how deep or high you've dug or built or visited (achievements), and so on - then you probably should have accounts denominated in something like 'utils', rather than choosing to use gold or silver (because real-world accountants always use dollars or the local currency, and gold and silver are the best in-game approximation to currency?).

More interestingly, even in games like SimCity or Eve that have an explicit in-game entity called "money", if you don't simply want more and more money but instead want to play in a sandboxy (yet optimized) way, then you probably shouldn't denominate your internal accounts in "money". Instead, you will have an account called "money", that holds utils.

One of the funny things about accounts as a first-order approximation of your actual utility function is price changes. Even if the state of a subcomponent (account) stays the same, if circumstances change, then the utility contributed to your total utility by that subcomponent (account) may change, perhaps wildly. One way you might explain this is that you have an internal price as well as an internal "inventory", and the price changed even though the inventory didn't. The inventory is an abstraction (just like the utility was an abstraction), and the price is more of a coefficient than anything set by a market.

Gah. I am too vague.

what I did today

There was a problem, and the microfeature that I was supposed to have done sometime last week turned out not to be working. So I investigated, and (re)figured out that the intended functionality was that a message would be received by a method on a particular class, but that method was polymorphically overridden by another method in a subclass, which delegated to a different method on the same class, which was polymorphically overridden by a subclass, which delegated to an interface, which had a default implementation that did nothing, and, two subclasses below the interface, the method that looked ALMOST like it was overriding the default implementation wasn't ACTUALLY overriding it, so (understandably) the signal was not emitted and the slot was not called and the slot did not delegate to the widget that, had it been notified, would have actually done what it was supposed to do.
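A minimal sketch of that last failure mode, with hypothetical names (the real code used signals and slots, but the shape of the bug is the same):

```python
# The base class supplies a do-nothing default; the subclass defines a method whose
# name almost matches, so it never actually overrides and the default silently runs.
class Handler:
    def on_message(self, msg):
        pass                               # default implementation that does nothing

class WidgetHandler(Handler):
    def on_mesage(self, msg):              # typo: this does NOT override on_message
        print("widget updated:", msg)

WidgetHandler().on_message("hello")        # prints nothing; the do-nothing default ran
```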

Perhaps programmers ought to learn a language that has INTERCAL's COMEFROM instruction, solely in order to understand how misuse of inheritance can be a bad thing.

If this reminds you of Heath Robinson and/or Rube Goldberg, you are correct. What is perhaps sad is that ridiculous layering and indirection and redundancy is actually how most software works - so far as I can tell, not having examined all software everywhere - admittedly, this is because to get almost anything done, a causal chain needs to traverse a huge number of ossified organizational barriers. The widget library, application, networking library, kernel, and service are probably all written by separate organizations, and it is only by looking at a particular transverse chain of events (as you might if you were debugging) that you can perceive the ridiculousness that is happening EVERY SECOND.

However, even within one fairly small team, accidental complexity can cause those ossified barriers and ridiculous chains, if it is not assiduously cleaned up. Sigh.

opposite of a meme

So memes, genes, and chain letters are a kind of thing, a message of some sort that may in the right host environment provoke a chain of events that yields more copies of that thing.

Individual humans are partly descendants of memes; I've previously argued that we should not identify solely with our genetic heritage and neglect our (chronologically more rapid, but vastly higher-bandwidth) memetic heritage.

There doesn't seem to be a word for what aspect of a human is entirely non-memetic - meaning also non-genetic.

All of life exists only outside of thermodynamic equilibrium - we are mostly powered by the sun, but if there were a thin layer of mucky fluid in between a slightly hotter surface and a slightly cooler surface, then life could evolve to take advantage of that heat flow. If the surfaces were the same temperature, but they were moving steadily relative to one another, then life would evolve to take advantage of that form of energy.

There are other non-equilibrium dynamics that have some qualities of life - convection cells or vortices (basically the same thing, just the direction you're looking at it from). Storms are powered by the earth's surface being hotter than the sky, and a cell is a column of rising hot air that spreads out and falls in a circular curtain of cool air. Cells have "metabolism", and they're autocatalytic - self-creating, as well as catalyzing the formation of nearby cells. So the opposite of a meme might be a vortex, or a stand-alone metabolism.

I think corporations have a metabolic aspect. If you take the complexity of a corporation to include the complexity of the humans working for it, they're of course more complicated than single humans. But if you take the complexity of the framework of corporate policies that would stay the same if somehow there was very high personnel turnover for a short burst, and every employee and owner swapped out for someone else, then they're complicated (among the most complicated things that we humans have ever built), but they're not impossibly complicated - comparable to a medium-large piece of software, perhaps.

If we take Star Trek as a starting point, and imagine that spaceships correspond to businesses / metabolisms / autocatalytic storm cells, and that the people beaming in and out of the starship correspond to memes, what kind of vision of the future do you get? One difference is that beaming isn't a kind of motion - it's a kind of copying. Essentially every time someone is beamed, they're duplicated. Another aspect is that people do not really grow or change - memory is probably something that the starship as a whole has, not a meme. A message stays the same message, even if (by combining it with some other premise) you reach a conclusion which is distinct.

A major activity of the starship might be critical thinking - accepting some visitors who beam onboard, and generally trying them out. Some of these visitors might be manipulative or deceiving - trying overtly or subtly to harm the starship as a whole. Others of these visitors might become valued members of the crew.

This analogy meshes with Greg Egan's vision of a polis (in his novel Diaspora), but makes it a little more clear that the primary reproductive entity in this future is the starship / polis, not the memes that inhabit it - they don't really know how to autocatalyze on their own.

(no subject)

I made a decision. It wasn't a very important decision, it was in a game, and I made it somewhat thoughtlessly, but I want to analyze it in order to understand how to make decisions in general.

I had a pile of recyclables - note that there's abstraction here: each item in the pile had a lot of detail to it - what kind of thing it was, how damaged it was, what kind of mats I could get out of it if I chose to recycle it rather than repair it. I could have recycled everything immediately and I would have gotten some mats. The mats don't have as much detail as the recyclables (they're more fungible), but I'm still abstracting a bit to characterize them as 'a pile of mats'. Instead I decided to schlep the recyclables over to a different station, which has a bonus for recycling efficiency, and then schlep the resulting mats back to the original station.

In doing that, I'm using a fixed asset (the robot that I was schlepping the stuff inside) and I was spending some of my own time. Was the decision worth it?

I think it's useful to analogize double-entry bookkeeping (which is not an enormously awesome idea, just a moderately useful and moderately confusing one) to event sourcing. Event sourcing emphasizes that an abstract data type can be isomorphic to a free data type modulo an equivalence relation. The free data type is something like a log of every command. Then the equivalence relation indicates which sequences of commands end up at equivalent points - such as pushing an element followed by popping that element, which might be equivalent to doing nothing at all.
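A minimal sketch of that idea, using a toy stack as the abstract data type:

```python
# The 'free' representation is just the log of commands; the equivalence relation
# is "replaying both logs leaves you in the same state".
def replay(log):
    state = []
    for command, arg in log:
        if command == "push":
            state.append(arg)
        elif command == "pop":
            state.pop()
    return state

log_a = [("push", 1), ("push", 2), ("pop", None)]
log_b = [("push", 1)]
assert replay(log_a) == replay(log_b)   # push-then-pop is equivalent to doing nothing
```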

Event sourcing allows you to change what the rules are, and figure out where you would be if you had been using those rules all along. The system of one 'everything' log, which has a (perhaps wordy) description of what each transaction really is and calls out all the accounts touched, plus account-specific logs that point (perhaps with a transaction number) back to the 'everything' log, is essentially maintaining indexes.

Then there's a sort of squashing step that compresses each account-specific log into a concise status of that account. Then there's a utility function which takes the statuses of all the accounts and gives a utility, a number. Usually you can partially differentiate the utility function with respect to each account and get a 'price'. If you multiply the prices by the statuses then you get a number which could be called the 'value stored in that account', but it doesn't necessarily have anything to do with the amount you spent to get that status (that's the sunk cost fallacy), nor with the amount you could get on the market by liquidating the underlying goods representing that account. If we believe that the account measures something of purely instrumental value - of no intrinsic worth, simply useful to achieve other things that we actually value - then it may have something to do with future utility flows associated with that account.
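A minimal sketch of squashed statuses, a toy utility function (the coefficients are invented), and prices read off as partial derivatives:

```python
def utility(status):
    # made-up coefficients: each mat is worth 2 utils, each minute spent costs 0.5
    return 2.0 * status["mats"] - 0.5 * status["time"]

def price(status, account, eps=1e-6):
    # partial derivative of utility with respect to one account's status
    bumped = dict(status)
    bumped[account] += eps
    return (utility(bumped) - utility(status)) / eps

status = {"mats": 40.0, "time": 15.0}
print(price(status, "mats"))   # ~ 2.0 utils per mat
print(price(status, "time"))   # ~ -0.5 utils per minute spent
# price * status is a 'value stored in that account'; it need not match what you
# paid to get it, nor what you could get by liquidating it on the market.
```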

So in this decision, there might be six accounts:


  1. recyclables in location 1

  2. mats in location 1

  3. robot

  4. time

  5. recyclables in location 2

  6. mats in location 2


There might be transactions like:

  1. take recyclables in robot to location 2 (decrease recyclables in location 1, increase recyclables in location 2, record use of robot, record time spent)

  2. recycle (decrease recyclables in location 2, increase mats in location 2)

  3. take mats in robot to location 1 (decrease mats in location 2, increase mats in location 1, record use of robot, record time spent)


And the alternative sequence of transactions is just the single transaction of recycling the stuff in location 1. If we stack those three transactions together, cancelling where possible, we can either derive something about the utility function (in order for it to ratify the choice) or we can praise or blame the choice (assuming the utility function is constant).

In the first case, the value in utility of the excess mats from the more-efficient recycling would have to be equal to or greater than the value of the time spent. If the utility function is constant, then we might be able to say that the choice was a loss (for example, spending too much time for too little a return). If the utility function is constant and the choice was a good one, then we could allocate the excess utility to the robot's account. By logging the excess utility enabled by the robot, assuming that the future will be similar to the past, and discounting the future flows of utility, we might generate a net-present-value figure for the robot that would enable us to evaluate deals that include losing the robot.
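A minimal sketch of that stacking-and-cancelling, with invented quantities:

```python
# Each plan expressed as net changes to the accounts; subtracting them shows
# exactly what the schlep bought and what it cost.
plan_haul = {"recyclables@1": -10, "mats@1": +14, "robot trips": 2, "minutes": 8}
plan_local = {"recyclables@1": -10, "mats@1": +10}

accounts = set(plan_haul) | set(plan_local)
difference = {a: plan_haul.get(a, 0) - plan_local.get(a, 0) for a in accounts}
difference = {a: d for a, d in difference.items() if d != 0}
print(difference)   # e.g. {'mats@1': 4, 'robot trips': 2, 'minutes': 8}
# The haul was worth it iff 4 extra mats are worth more, in utils, than
# 8 minutes of my time plus the wear on the robot.
```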

Sigh.

heft and customer experience

As you probably know, people frequently pay more for a product that they can hold, and that they enjoy holding, than for a more "virtual" product. For example, Apple makes a lot of money selling physical objects such as iPads and iPhones, and pays a lot of attention to the "feel" of the interaction - the swiping and so on. By contrast, products produced by Apple ecosystem developers, apps and eBooks, are remarkably difficult to make money on - some of this is, of course, Apple extracting money from the ecosystem, but there's also something about customers' purchasing behavior and what they perceive as worth paying money for.

Another similar (in my mind) example is teleportation in massively multiplayer online games - it is a comparatively easy feature to develop or add to a game, and the existing playerbase would certainly ask for it and use it, but many games do not have teleportation. I believe this is because the developers believe that teleportation would damage the "feel" of the virtual world, making it more similar to experiences such as browsing the web or instant messaging, which (though compelling) are not easy to monetize. Fewer people would join or become committed to the game, and existing players would more quickly leave for other activities.

Let's call the quality in those two examples "heft". You might prefer a tool that has a nice solid 'snick' when it opens or operates; that's an example of heft. My question is - to what extent is pursuing heft virtuous? Is it reasonable to justify pursuing heft as 'This is one of many things that people enjoy, and I am providing a good customer experience'? Or is heft a kind of cognitive bias, like hyperbolic discounting? I certainly think that taking money from someone who is cognitively disabled is reprehensible, and we are all cognitively disabled by comparison to big, well-funded organizations composed of professionals who work closely with computers.

People nowadays have relationships with software-as-a-service businesses, businesses made up of relatively few humans and relatively many computers (in big datacenters which may or may not be owned and operated by the customer-facing business). When I say 'relatively few humans' I mean the number of customers is so much higher than the number of employees that it would be infeasible for the customer to have even the extremely mild facial-recognition-and-slight-body-language relationship that you have with the person who usually bags your groceries. That person who bags your groceries can probably sustain that sort of extremely mild relationship with hundreds of customers, so we're talking about businesses that have something like a factor of 1000 more customers than they have employees.

That relationship is a prototype of a 'low-heft' good. Some actions by companies might increase heft. For example, GOG (a.k.a. "Good Old Games") prices their service as if you were purchasing physical goods, and when you log in it shows an image of a shelf of boxed games. Both of those actions seem to me to be trying to increase heft, to convince you that your relationship with GOG ought to be considered analogous to a nearby shelf of objects. An author might sell a service, a service that is mostly simple text that might be delivered electronically, but in order to increase heft, they might print the text on thick paper, sign it by hand, and seal the paper with wax.

The service of showing a particular piece of admittedly charming art has some costs - bandwidth and power and the amortized initial cost of the artists' and programmers' labor - but the profit margin on a "virtual" good can be insanely huge. In the case of Zynga or similar, big well-funded companies that callously map our cognitive biases around our sense of value and exploit them so that some people purchase these goods with insane profit margins, actions to increase heft seem to be non-virtuous. But in the case of a small author trying to create a great customer experience, actions to increase heft seem to be virtuous. How do I distinguish one case from the other? Is it not fair for something so large and smart as a corporation to use those abilities to persuade people to basically give it money?

responsibility centers

Responsibility centers are an idea in management of corporations which might be relevant to software architecture.

Reinforcement learning (e.g. Sutton and Barto), and particularly hierarchical reinforcement learning (e.g. Dietterich's MAXQ), might be a good lens for understanding responsibility centers from a programming perspective - and once I understand responsibility centers as a management/corporate-structure technique, they in turn might be a good lens/metaphor for understanding medium-to-large-scale software architecture.

So imagine a business, a business in the business of being an agent - something like an Eve Online corporation consisting of only one capsuleer. There are a lot of actions that are available, things that might be advantageous to do, including buying things, selling things, moving from place to place. If you hooked up a simple, flat reinforcement learning algorithm to this agent, using profit as a reward signal, then it would learn, but it would learn slowly.

You could instead separate the overall task into subtasks such as procuring, freighting, sales, and strategy. Each subtask has responsibility for making a fraction of the choices that the original flat business/agent had. Each subtask will also need a 'scorecard', or modified reward signal, that indicates how well it is doing at its task. Ideally, since each subtask is much easier than the whole, it will be learned rapidly.

Furthermore, you can often arrange for the hierarchy to have some sharing, maybe a lot of sharing. In Dietterich's taxi example, going to pick up a passenger is a 'go' task that is shared with going to drop off a passenger. Sharing is good for two reasons. One reason is that the shared component gets more experience - so it moves up the learning curve faster. Another reason is that all of the supertasks improve when the shared component improves, even if they weren't running at the time.
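A minimal sketch of that sharing, with invented names, loosely in the spirit of the taxi example:

```python
# Two parent tasks share one 'go' subtask, so every execution of either parent
# adds experience to the shared component.
class GoTask:
    def __init__(self):
        self.experience = 0
    def run(self, destination):
        self.experience += 1            # stands in for a learning update
        return "went to " + destination

go = GoTask()                           # one shared instance

def pick_up_passenger(location):
    return go.run(location) + ", picked up"

def drop_off_passenger(location):
    return go.run(location) + ", dropped off"

pick_up_passenger("red landmark")
drop_off_passenger("blue landmark")
print(go.experience)                    # 2: both parent tasks trained the shared subtask
```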

Software engineers often emphasize refactoring to reduce repetition. There is some advantage to reducing size, but even when it technically increases the lines of code, software engineers routinely advocate "reducing repetition". I think one good explanation is that we are not simply reducing repetition, but introducing sharing. The learning curve or reliability growth curve of a software component is a sequence of bug fixes and optimizations, and the software as a whole will improve faster if there is more sharing.

(I'm not sure if I've seen this kind of justification of the economic benefit of the division of labor before - yes, it's sort of an increasing-returns-to-scale effect, but it's not like a steel tank where the cost is proportional to area and the benefit is proportional to volume.)

There's a taxonomy (which is probably taught because it is neat, even though the real world is scruffy) of so-called "responsibility centers", dividing them into cost centers, revenue centers, and profit centers. The cost centers are for subcomponents of the organization where you've given the manager the ability to control their costs, but not really the ability to control their revenues; their scorecard or reward signal shows how little they spent. Similarly, revenue centers are subcomponents where the manager can control revenue but not really costs, and profit centers are for subcomponents where the manager can control both.
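A minimal sketch (entirely my own phrasing of the taxonomy) of the three scorecards as different reward signals over the same subcomponent:

```python
# Same subcomponent state, three different scorecards.
def cost_center_score(costs, revenues):
    return -costs                       # the manager is judged only on spending little

def revenue_center_score(costs, revenues):
    return revenues                     # judged only on bringing revenue in

def profit_center_score(costs, revenues):
    return revenues - costs             # judged on both

print(cost_center_score(80, 120), revenue_center_score(80, 120), profit_center_score(80, 120))
```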

In software architecture, it is routine to allocate responsibilities, but it's not as routine (or not as emphasized) that you also need to explain what "good" is for each component. Some components will be more or less performance-critical, and performance may be in different terms - latency or memory usage, for example.

I'm not sure to what extent real-world businesses can be described as using an object-oriented internal architecture. I imagine a listener department that, when a client comes in, spins up a whole client-specific division to deal with that client's requests - which is not that unrealistic - a construction company perhaps?