You are viewing atheorist

Idle Speculations

Below are the 20 most recent journal entries recorded in atheorist's LiveJournal:

    Sunday, January 5th, 2014
    11:34 am
    the pong tutorial in clickteam's the games factory 2

    1. create a new game (control-n)
    2. click on it to see the properties on the left
    3. change the background color to black (settings/background color/black)
    4. switch to the frame editor (control-m)
    5. add an active object (alt-i, o, act, tab, enter, click)
    6. change movement to bouncing ball
    7. change speed to 40
    8. switch to the event editor (control-e)
    9. based on the ball, create a new condition, position, select all arrows pointing away from the center
    10. based on the ball, create a new action, movement, bounce
    11. go back to the frame editor (control-m)
    12. add a paddle to the play area (alt-i, o, act, tab, enter, click)
    13. clone the paddle and place it on the opposite side of the play area
    14. select both paddles
    15. set their movement to eight directions
    16. change the right paddle's player property to 2
    17. change the game's properties - runtime options/player controls/player 2 should be wasd
    18. change back to the event editor (control-e)
    19. add a condition, based on one of the paddles, collision with / another object / the ball
    20. use the same action, bounce
    21. do the same thing with the other paddle (or drag the event you just made down to event 3 and change the other paddle instead)
    22. change the original bounce condition to not bounce when hitting left and right
    23. add a new condition that, if the ball leaves the play area on the left or right, it gets centered in the screen
    24. add a new condition that, if the ball leaves the play area on the left, the number of lives for player 1 gets decreased
    25. add a new condition that, if the ball leaves the play area on the right, the number of lives for player 2 gets decreased
    26. add a new condition that, if player 1's lives are zero, then storyboard controls / end the application
    27. add a new condition that, if player 2's lives are zero, then storyboard controls / end the application
    28. switch to the frame editor (control-m)
    29. insert a lives object (alt-i, o, liv, tab, enter, click)
    30. clone the lives object, place it above player 2 and change it to player 2
    31. -- TODO: add a background
    32. -- TODO: add sounds to the events

    require 'framework'
    -- create a new game (control-n)
    the_game = new_game()
    -- change the project default frame 1 to a black background
    the_frame = the_game.frames[1]
    the_frame.background = BLACK
    -- make an active object
    ball = new_active{ x = 320, y = 240 }
    -- give the active object the bouncing-ball movement
    ball.movement = BOUNCING_BALL
    ball.movement.speed = 40
    table.insert(the_frame.objects, ball)
    -- if the ball leaves the frame via the top or bottom, it instead bounces
      new_event(ball:position{LEAVE_TOP, LEAVE_BOTTOM}):add_action(ball:movement(BOUNCE))
    paddle_1 = new_active{ x = 100, y = 240 }
    paddle_1.movement = EIGHT_DIRECTIONS
    paddle_2 = new_active{ x = 540, y = 240 }
    paddle_2.movement = EIGHT_DIRECTIONS
    paddle_2.player = 2
    table.insert(the_frame.objects, paddle_1)
    table.insert(the_frame.objects, paddle_2)
    the_game.runtime_options.player_controls.player[2] = WASD
    -- if the ball leaves the play area on the left or right, it gets centered in the screen
      new_event(ball:position{LEAVE_LEFT, LEAVE_RIGHT}):add_action(ball:position{x = 320, y = 240})
    -- if the ball leaves the play area on the left, lives for player 1 gets decreased
    -- if the ball leaves the play area on the right, lives for player 2 gets decreased
    -- if player 1's lives are zero, then end the game
    -- if player 2's lives are zero, then end the game
    --[[ TODO
    -- switch to the frame editor (control-m)
    -- insert a lives object (alt-i, o, liv, tab, enter, click)
    -- clone the lives object, place it above player 2 and change it to player 2
    -- add a background
    -- add sounds to the events
    --]]
    Wednesday, January 1st, 2014
    1:30 pm
    the scrolling tutorial in clickteam's games factory 2
    Design an API by imitating a GUI design success? This is based on a tutorial for games factory 2's scrolling.
    1. control-n to start a new project
    2. in workspace, click on frame 1, then in properties, click on 'virtual width', and change the default (640) to larger (6400)
    3. switch to frame editor (control-m)
    4. make a quickbackground object, something to see (alt-i, o, quick, tab, enter, click)
    5. make an active object (alt-i, o, act, tab, enter, click)
    6. give the active object the 8-direction movement (properties/movement/type/eight directions)
    7. switch to the event editor (control-e)
    8. make an always condition (alt-i, d, right-click on the special computer with a question mark, select always)
    9. right click on the intersection between always and storyboard controls (a chess knight in front of a chess board), and select scrollings/center window in frame
    10. in the dialog, pick 'relative to', click active, edit x and y to 0, ok
    11. test it by running it (F7)
    If you take that tutorial, and try to render it in lua-esque syntax, you might get something like this.
    require 'framework'
    -- start a new project
    the_game = new_game()
    -- change the project default frame 1 to a larger-than-default virtual width
    the_frame = the_game.frames[1]
    the_frame.virtual_width = 6400
    -- add something to see
    table.insert(the_frame.objects, new_quickbackground{ x = 100, y = 200 })
    -- make an active object
    player = new_active{ x = 200, y = 300 }
    -- give the active object the 8-direction movement
    player.movement = EIGHT_DIRECTIONS
    table.insert(the_frame.objects, player)
      new_event(ALWAYS):add_action(center_window_in_frame{relative_to = player, x = 0, y = 0})
    Can I develop a framework where this syntax works? Yes, sort of - using SDLFW (which is a young libsdl-bound-to-lua thing), I made something that sort of works.
    Friday, November 15th, 2013
    6:12 pm
    infrequently asked questions about accounting
    I am fascinated by some infrequently asked questions about accounting.

    One is "What is the relationship between accounting and dead-reckoning logs as kept by ancient sailors?".

    Another is "Why do accountants denominate all their accounts in cash?".

    I think I have a possible answer to the second. It has to do with some contingent facts about business, which have been stable for so long that they seem to be necessary.

    In business, you have various stores of value, and generally you have a "metabolic cycle" or "value cycle", which (if you're a going concern) is, in essence, over-unity. This is Marx's MCM, but it gets more complicated when you trace a specific business in detail; you might rent a truck and hire a driver and buy gasoline and maintenance, and then exchange a promise to pick up and drop off a load at certain points for another promise to pay, and then go to the start point, exchange a receipt for a load, go to another place, exchange a load for a receipt, and then exchange the pair of receipts for money.

    In most of your stores of value, there is a carrying cost. If you own a truck, then you're vulnerable to a novel risk of your truck being stolen, or breaking down, or trucks in general being made illegal, or something. Therefore, as you grow (over unity remember?), you're probably not going to want to spread your value evenly among your various stores of value. Instead, you're going to want to minimize your "work in progress", and put most of your growth at the store of value with the lightest carrying cost.

    Cash (or cash equivalents) is generally the store of value with the lightest carrying cost. In unusual real world situations, something else (barrels of oil?) might be the store of value with the lightest carrying cost, the one that as you grow you want to keep most of your growth in, and so you might want to "recognize revenue" when you complete a cycle from barrels of oil back to over-unity barrels of oil. In other situations (a trader in the USD/EUR currency exchange market?), your "utility" might be a combination of two stores of value.

    That is, accounting has ALWAYS been in units of utility. That accountants denominate accounts in units of money is simply an artifact of the historical, contingent fact that businesses want to keep their growth in cash. If you want a new accounting, trying to grow something other than money, you can still use a lot of existing ideas from accounting. For example, if you have an uncertain venture, you might put the potential downside onto your balance sheet immediately, and only if and when the venture pays off, cancel it out.
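    The balance-sheet idea in the last paragraph can be sketched as a tiny double-entry ledger denominated in an arbitrary unit. Everything below (the class name, the account names, the amounts) is invented for illustration:

```python
from collections import defaultdict

# A minimal double-entry ledger. The unit of account is deliberately
# abstract: it could be dollars, barrels of oil, or tools.
class Ledger:
    def __init__(self):
        self.balances = defaultdict(float)

    def post(self, debit_account, credit_account, amount):
        # Double entry: every posting debits one account and credits
        # another, so the ledger as a whole always sums to zero.
        self.balances[debit_account] += amount
        self.balances[credit_account] -= amount

ledger = Ledger()
# Recognize the potential downside of an uncertain venture immediately.
ledger.post("venture_risk_reserve", "equity", 100)
# ... later, the venture pays off: cancel the reserve back out.
ledger.post("equity", "venture_risk_reserve", 100)

assert ledger.balances["venture_risk_reserve"] == 0
assert sum(ledger.balances.values()) == 0
```

    Nothing in the Ledger assumes cash; the same mechanics work whatever store of value you are trying to grow.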
    Monday, September 9th, 2013
    4:12 pm
    programming as casting metaphor
    There are various metaphors people use for programming; one example is "programming is like writing", another is "programming is like construction", mathematics, analysis, craft. More extreme examples are dreaming or cultivation.

    To add to this list, I would like to analogize programming to casting - pouring molten metal into the space between two (or in general several) molds, often made of a sand/clay mixture.

    * The metal is analogous to code.
    * The heat applied to the metal is analogous to the programmers, who (temporarily) "inhabit" the code, and understand it deeply, making it flexible.
    * The sand is analogous to requirements documentation.
    * The clay is analogous to customer or business representatives, who "inhabit" the requirements documentation and bind it together into a cohesive whole.
    * The two (usually) halves of the mold are analogous to the two (usually) major interfaces from the system to the external world - e.g. a system might interface to the customer (who purchases items from the system) and a warehouse.
    * The sprues, gates and runners which are necessary to cast, but also need to be removed from the final product are analogous to the build system and unit tests.
    * The machining of the final product is analogous to the deployment.

    The analogy extends to typical kinds of defects:

    * A misrun occurs when the molten metal does not fill out all of the requirements.
    * A cold shut occurs when the frontier of the molten metal pauses or dwells during pouring, and the programmers have to go back to forgotten code.
    * Mold mismatch is a requirements error that comes from lack of alignment between the different facets dictating requirements.
    * A sand inclusion can occur when requirement documents detach from being owned by the customer, becoming "lost purposes" that are owned and served by the programmers, complicating the system unnecessarily.
    * A run out is when requirements are so faulty that the programmers start writing code based on imaginary requirements.
    * A pipe is a void that occurs near the sprue feeding new code into the project; these can occur when a feature that needs to be done last is not done at all.
    * Mechanical damage can occur during the machining / deployment.
    * Bubbles and porosity are the classic bugs and they generally occur at so-called "hot spots", which are portions of the code base that are frequently touched and changed by programmers.

    To some extent, even the solutions that foundry workers use to deal with these problems sound pretty reasonable. For example, if you're experiencing porosity defects at hot spots, you might consider using chillers, which would be project management prioritizing getting programmers OUT of the hot spots, so that they solidify sooner. Or if your business requirements people are emitting bubbles of hot air when they interact with your programmers trying to write code, you might bake them (gently expose them to programmers who are not trying to write code) for a while beforehand to dry them out.
    Tuesday, August 27th, 2013
    5:24 pm
    geometry and self-replication
    Inspired by cp4space's blog about the geometry of self-replication:

    I've been obsessing about how, or in what sense, technologies are critters (not just memes) that replicate in the context of human beings.

    There's a standard "biblical" self-replicating loop, where a physical book, dropped into an environment with sufficient literate humans, blank books and writing implements, can (accidentally) be read by a human and (perhaps) persuade the human that it would be a good idea to copy out the book (exactly) onto a new book.

    A machine, such as a pump, can also self-replicate. At first I was thinking about a path involving a human disassembling the pump into parts, then using the parts as patterns in casting (perhaps sand casting) replacement parts. However, this cycle doesn't actually close - one generation might work well enough, but there is noise and shrinkage, which means that you can't keep doing it one generation after another into the future.

    If the part had something like G-Code printed on it, then that could obviously close the loop. (This is recognizably a quine, like the virus protein shell and the data inside it.) The human could obey the G-Code on some standard set of machining equipment - perhaps if we're being low-tech for intuition-boosting, the G-code might instruct the human interpreter in a sequence of ruler and compass constructions, which, executed in wax or wood and then sand-cast, recreate the part. However, parts do not generally come with G-Code printed on them, or even schematics.

    Schematics printed on the part would require competent machinists to be available in the context, but at least a schematic is digital.

    Still, the dimensions of a part ARE visible - if you carefully measure a part that you have, and make some reasonable guesses, you can generally recover the (digital) schematic of the part. For example:

    This suggests a very geeky variant of telephone. The originator picks an irrational number with a formula, something like cos(1)*ln(Pi)^2. They compute it out to 8 digits or so (0.70801566) and add a little random noise to it. They send the noisy number to their neighbor, who puts it into something like the inverse symbolic calculator, and guesses which formula it is (balancing concerns like simplicity, aesthetics, and nearness to the number that they received). Then they compute it out, add noise to it, and send the number onward.
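    One round of that telephone game is easy to sketch; the noise scale below is an arbitrary choice:

```python
import math
import random

# The originator's formula: cos(1) * ln(pi)^2.
value = math.cos(1) * math.log(math.pi) ** 2
eight_digits = round(value, 8)   # approximately 0.708016

# Add a little random noise before passing it on; the receiver's job
# is to recover the original formula from this noisy number.
noisy = eight_digits + random.gauss(0, 1e-7)
print(eight_digits, noisy)
```

    The receiver's half (the inverse symbolic calculator lookup) is the interesting, hard-to-sketch part - it is exactly the "measure a part and guess the schematic" move.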

    I think human beings must have a form of teleology sensors in our cognitive apparatus. Something about seeing the plan or mechanism behind an action, or even a device. It might be a combination of empathy and affordance sensors - the ability to see what you might be able to do with a thing.
    Friday, June 28th, 2013
    2:17 pm
    bond graph modeling of businesses and economies
    In bond graph modeling, the simplest way that two subparts of the whole can be connected is called a bond. The bond has a "typical power direction", pointing from one part to the other, represented by an arrowhead (traditionally, actually a half-arrowhead). It also has a "causal stroke", which can be on the same end of the bond as the arrowhead or the opposite end.

    Someone called Brewer suggested that bond graphs could be used for economic modeling. The bond represents repeated sales of something from one entity to another entity. The arrowhead indicates which entity is selling and which is buying. The causal bond indicates which side is setting the price (the other side gets to set the order rate e.g. units per year). Brewer's papers are hard to find, but I think I understand essentially what was in them.

    A simple model of a firm might have three "ports" - bonds piercing the envelope of the firm. One port is selling the finished product to customers. Another port is buying raw inputs from the raw market. The third port might be buying tools or machinery necessary to transform raw goods into finished goods. Internal to the firm, there might be stockpiles of raw and/or finished goods. The stockpile acts as a component that integrates order flow (or difference in order flow). That is, the raw stockpile contains the integral of the raw purchases minus the raw used up. The business might have a rule for setting the price based on the level in the stockpile. This kind of component is called a "C" component in bond graph modeling; in the electrical domain it would be a capacitor, and in the mechanical domain it would be a spring.

    For simplicity, let's assume that we run the machinery continuously. That means that the machinery sets a work rate, how fast raw is transformed into finished. If finished is more valuable than raw, then the machinery accumulates profit. If we continuously reinvest the profit in more machinery, then the level of machinery is the integral of the difference in price between finished and raw. This kind of component is called an "I" component; in the electrical domain it would be an inductor, and in the mechanical domain it would be a mass with inertia.

    If this firm starts buying a lot of raw (and possibly machinery), the raw and machinery markets may shift. If we simply model increased demand immediately causing increased price (via elasticity), then we can model the raw market as a curve that, given a price, tells what the rate of supply will be. In the opposite direction, we could model the machinery market as a curve that, given a rate of purchases, tells what the price is. This kind of component is called an "R" component. In the electrical domain it would be a resistor, and in the mechanical domain it would be some kind of friction.
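    A crude time-stepped sketch of this firm - the stockpile as a C component integrating order flow, the machinery as an I component integrating reinvested profit, and made-up linear curves standing in for the R-like markets. All the coefficients are invented for illustration:

```python
# Euler integration of the toy firm: a C component (raw stockpile)
# and an I component (machinery level), coupled through prices.
dt = 0.01
stockpile = 10.0    # C: integral of purchases minus raw used up
machinery = 1.0     # I: integral of reinvested profit

for step in range(10000):
    # R-like market curves (made-up linear forms):
    raw_price = 1.0 + 0.05 * machinery       # heavier buying raises the price
    finished_price = 3.0
    # Firm's pricing rule: buy faster when the stockpile is low.
    purchase_rate = max(0.0, 2.0 - 0.1 * stockpile)
    # The machinery runs continuously and sets the work rate,
    # limited by the raw actually on hand.
    work_rate = min(machinery, stockpile / dt)
    stockpile += (purchase_rate - work_rate) * dt                      # C integrates flow
    machinery += (finished_price - raw_price) * work_rate * dt * 0.1   # I integrates profit

print(stockpile, machinery)
```

    With these numbers the machinery level grows over-unity until the rising raw price eats the spread, which is the qualitative behavior the bond graph is meant to capture.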

    The bond graph that I'm discussing looks like this:

    Doran and Parberry wrote an "Emergent Economies" paper, and Lars Doucet reimplemented it and published his code.

    The example of "an economy" in that paper has essentially five sectors, farmers, woodcutters, miners, refiners, and blacksmiths, and five goods, food, wood, ore, metal, tools. Roughly speaking, the farmers produce food using wood as a raw material, and the woodcutters produce wood using food as a raw material, but both depend on tools being ambiently available. So given tools, the farmer-woodcutter loop is an over-unity engine of growth. Similarly, the miners produce ore, the refiners turn ore into metal, and the blacksmiths turn metal into tools, but all depend on food being ambiently available. So given food, the miner-refiner-blacksmith loop is another over-unity engine of growth.

    It would be straightforward to duplicate the bond graph above five times and wire it together to form a bond graph model of the whole (tiny) economy. I think trying to "polish" the bond graph model against an agent-based simulation of that economy would be interesting; they're very different formalisms.
    Monday, May 27th, 2013
    8:39 am
    Galileo had an argument against Aristotle's law of gravity.
    Aristotle's law of gravity was that objects fall at a speed proportional to their weight.
    Galileo's argument against it was something like:
    "Consider two things connected together only loosely. On the one hand, considering them as the aggregate thing, they should fall fast. On the other hand, considering them as two separate things, they should fall slow. What would the tension in the last strand of twine be like? This is weird."

    This is a kind of argument from continuity - Aristotle's law has a discontinuity as you go from a single barbell-shaped object to two adjacent objects nearly touching. If we believe that the laws of nature ought to be continuous with respect to that transformation, then we can reject Aristotle's law from the armchair.

    One thing you can do with a circuit is draw its signal flow graph. For some simple circuits, drawing the signal flow graph is follow-your-nose easy. For some very slightly more complicated circuits, you get arguments like this:

    Consider a current source (Sf) wired up in parallel with two other branches. The first branch has a resistor (R1) and a voltage source (Se1), while the second branch is similar (R2, Se2).

    Trying to draw the signal flow graph (in the time domain), we might say:
    1. Start at the current source, Sf.
    2. Let x be the current through the 1 branch. (Note, we could have gone the other way).
    3. Then Sf-x is the current through the 2 branch.
    4. So (Sf-x)*R2 is voltage across R2.
    5. So (Sf-x)*R2+Se2 is the voltage across the whole circuit.
    6. So (Sf-x)*R2+Se2-Se1 is the voltage across R1.
    7. So ((Sf-x)*R2+Se2-Se1)/R1 is the current through the 1 branch, that is, x.
    8. Solving for x, we find that x==(Sf*R2+Se2-Se1)/(R1+R2).

    (This is called an algebraic loop in bond graph terms).

    This process of predicting what the circuit will do is not "shaped like the circuit". It involves steps that are contingent on feeling stuck, it has asymmetries where the circuit has symmetries, it's nonmechanical. We could make the final symbol-juggling almost arbitrarily hard by introducing nonlinearities, but apparently the circuit can juggle those symbols essentially instantaneously. This cannot be how the circuit itself computes its behavior.

    Perhaps it was foolish of me to believe that the human process of solving the easy circuits was analogous to the circuits' method of computing its behavior.

    If we spread the circuit far enough apart, the connections between the pieces will need to be modeled with transmission lines. In order for the constitutive laws to be continuous with respect to whether we model the circuit as containing transmission lines or not, the transients in the transmission-line variant of the circuit ought to die out quickly. Furthermore, the transients can probably be viewed as computing the answer to the set of constitutive equations, perhaps by iterative relaxation (Jacobi or Gauss-Seidel methods?)
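    That relaxation reading can be tested on the circuit above: iterating step 7 as an update rule converges to the closed-form answer of step 8 whenever R2/R1 < 1 (the contraction condition, a discrete stand-in for the transients dying out). The component values below are arbitrary:

```python
# Solve the algebraic loop x = ((Sf - x)*R2 + Se2 - Se1)/R1 by iteration
# rather than by symbol-juggling. R1 > R2, so the update is a contraction.
Sf, R1, R2, Se1, Se2 = 3.0, 2.0, 1.0, 0.0, 1.0

x = 0.0
for _ in range(60):
    x = ((Sf - x) * R2 + Se2 - Se1) / R1   # step 7, used as an update rule

closed_form = (Sf * R2 + Se2 - Se1) / (R1 + R2)  # step 8
print(x, closed_form)
```

    This is Jacobi/Gauss-Seidel-style relaxation in miniature: the "circuit" never solves for x symbolically, it just settles.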

    I think this might be a 7th or 14th order differential equation, depending on how you model transmission lines (or whether you count a complex number as one or two degrees of freedom)? Regardless, it probably converges toward a steady state pretty rapidly in a lot of reasonable models of transmission lines.

    That is, the circuit laws don't care exactly how you parse the circuit, because they're continuous with respect to "nearly the same" parses, even though I personally feel more confident in solving a single linear equation than computing the steady-state behavior of a differential equation.
    Wednesday, April 10th, 2013
    6:36 am
    dataflow / class diagram duality
    Class diagrams in UML (for my purposes) have boxes, two kinds of arrows (isa and hasa) between boxes, and methods, which are essentially strings, inside the boxes. Class diagrams correspond to object-oriented code in that each box probably has a section of code corresponding to it, each method probably has a section of code corresponding to it, and each arrow leaving a box indicates a collaborator that will need to be considered.

    Objects can be encoded into functional languages like ML or Haskell by an object-closure correspondence. Where the object-oriented code creates a new object, the functional code would define a new functional value, using a lambda. Where the object-oriented code invokes a method on an object, the functional code calls a function value. In order to model method dispatch, the functional code might call a function value representing the object as a whole, passing a known-at-compile-time value of an enumerated type as an argument, and then call the returned (function) value with the arguments of the method, something like this:

      v(PRINT)('hello %s', {'world'});

    If you have functional code, but you need object-oriented code, then you can go the other way. (This is called defunctionalization, and the experts to google and read are Danvy and Reynolds.) For each function value constructed in the source code, you need a constructor, and usually a class. (Multiple constructors on the same class is possible, but dubious if you're shoehorning, and unlikely if you're not shoehorning.) The lexically-scoped variables captured by the function value become the fields of the class (and the arguments of its constructor). Then you need to study the dataflow, and find out where the function value will be consumed (applied). That will give you your method name. Generally the dataflow looks like a river - several tributaries coming together to one port where the river ends. In statically typed object-oriented languages like C++ or Java, that means that you will need an interface (or pure virtual abstract class, same difference), corresponding to the consumption point. Then each of the origin points will need to declare that they implement that interface.
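    A minimal sketch of both directions, with invented names (nothing here is taken from Danvy or Reynolds directly): a closure capturing one variable, and its defunctionalized counterpart, where the captured variable becomes a constructor field and the consumption point becomes an apply method.

```python
# Closure direction: a function value capturing a lexical variable.
def make_adder(n):
    return lambda x: x + n

# Defunctionalized direction: one class per lambda in the source.
# The captured variable n becomes a field; application becomes apply().
class Adder:
    def __init__(self, n):
        self.n = n

    def apply(self, x):
        return x + self.n

assert make_adder(3)(4) == Adder(3).apply(4) == 7
```

    With several different lambdas flowing to the same consumption point, Adder would become one of several concrete classes behind a shared interface - the "one-method classes in little groups" shape mentioned below.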

    tl;dr - you can recognize "wannabe-functional" code in an OO language from the class diagram - it has a bunch of one-method classes, often in little groups (an interface and its several concrete implementations).
    Tuesday, March 26th, 2013
    11:07 am
    calibration as a business model
    A startup needs to do a couple things. First and most importantly, it needs to create some value for the customer. Subsidiary second and third goals are to capture that value (the startup wants to be "sticky") and to create some sort of barrier to other businesses entering.

    Calibration is a process of doing something moderately easy, just a little finicky, "forward" many times, in order to gain the ability to "magically" do it backward. For example, if you have a balance, some gram weights and a wine glass, then you can pretty easily fill the wine glass with 10 grams of wine and take a picture. If you repeatedly and carefully do this for 12, 14, 20, 100 grams, then you can build a database of images of wine glasses tagged with how many grams of wine are in each. Then you could use that database to go backward from an image to how much wine is in it.
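    The forward/backward structure can be sketched with a lookup table standing in for the image database; the forward relationship below is a made-up stand-in for "photograph the glass":

```python
# Forward: the easy, finicky measurement. Pretend the "image" reduces
# to a single observed number (say, liquid height) via a process we can
# run but not invert analytically.
def forward(grams):
    return 3.2 * grams ** 0.5   # hypothetical height-vs-mass curve

# Calibration: run the forward process many times and record the results.
calibration = {grams: forward(grams) for grams in range(10, 101, 2)}

# Backward: "magically" invert by nearest neighbour in the database.
def backward(observed_height):
    return min(calibration, key=lambda g: abs(calibration[g] - observed_height))

assert backward(forward(20)) == 20
```

    The database, not the trivial lookup code, is the asset - which is exactly the barrier-to-entry point below.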

    First: The "magical" quality of the backward direction fits well with creating some value for the customer.

    Second: By offering the calibration as a service, the startup might well be able to capture that value.

    Third: The expense of creating the database is actually good, because it forms a (small) barrier to other businesses entering.

    To use this "calibration" template to generate ideas for businesses, you need to find a problem that people actually want solved, and that is relatively easy to do "backward" - which is still hard, but perhaps easier than the initial problem of "an idea for a business". Perhaps one could accumulate a personal library of techniques like this calibration template, something like a personal, business-focused variant of TRIZ.
    Saturday, November 10th, 2012
    12:26 pm
    the relation between mathematics and mathematical logic
    Okay, so mathematicians are awesome, and I wish I was one. Also, they do lots of different things - some of them work with numbers, others with linear transformations, others with partial orders, manifolds, permutations, braids, categories, sheaves, all kinds of things. One of the side fields (kind of a backwater, really) is mathematical logic, also (pompously) called "foundations of mathematics". This makes it sound important, but think of it like the relationship between biology and physics. If a biologist makes a discovery about the embryonic development of a nematode, then that's great. It has an explanation in terms of physics, to be sure, but the biologist can work and make progress quite independently of the physicists. The biologist's discovery must be compatible with the physicists' theories - but it's not the biologist's job to make it compatible. If it's anyone's job, it's the physicists' job - their theories had better be compatible with the observed biology.

    Inside of mathematical logic, there are philosophical positions, and people like me, who follow these things, get peculiarly emotionally attached to various positions. At the moment, a moderately high-status person-on-the-internet is claiming that second-order logic is NECESSARY for us to talk about the integers. Clearly, I am convinced that they are wrong-on-the-internet. In working out my (irrational, ridiculous) emotions, I am writing a blog post.

    Friday, November 9th, 2012
    3:29 pm
    learning to program
    One of the first things that you need to do if you want to learn to program is to look at the connections between syntax and trees and nested parentheses and semantics. We speak to one another (and write to one another) so fluently that it's not obvious that natural language is in some important sense tree-shaped. Someone who has never studied linguistics might be distracted by the (less important) sense in which it is line-shaped - in time (if we are speaking or gesturing) or possibly space (if we are writing).

    The tree diagram is nice to have in mind, but it is not terse - it takes a huge amount of visual space. (Oriented) trees are isomorphic to sequences of balanced parentheses - ((()())(()())) is like a tree with seven nodes. If we have words labeling each node of the tree, then on the balanced parentheses side we get something like this: top(left(ll(), lr()), right(rl(), rr())).
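    Walking from balanced parentheses back to a tree takes only a few lines; counting the nodes of ((()())(()())) does give seven. A sketch, representing each node as the list of its children:

```python
# Parse a balanced-parentheses string into a nested-list tree,
# where each node is simply the list of its children.
def parse(s):
    stack = [[]]
    for ch in s:
        if ch == '(':
            node = []
            stack[-1].append(node)
            stack.append(node)
        elif ch == ')':
            stack.pop()
    return stack[0][0]   # the single root node

def count_nodes(tree):
    return 1 + sum(count_nodes(child) for child in tree)

tree = parse('((()())(()()))')
assert count_nodes(tree) == 7
```

    Going the other way (tree to parentheses) is a one-line recursion, which is what makes walking back and forth between the representations so cheap.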

    It is nice to be able to walk back and forth. One direction is from ordinary language to trees to balanced-parentheses representing trees in order to generate (a starting point for) code. The other direction is from at-first-inscrutable formal text to trees and from trees to some natural-language pronunciation of the formal text in order to read and have at least a stab at understanding code that someone else wrote.

    After that basic step, there are a lot of "next steps". One important next step is learning to learn - how to search for information in an archive, how to learn how something works by experimenting with it, how to feel more subtle emotions about your own ignorance than simple despair or simple overconfidence. Another important next step is learning about how the world is put together - How did this text get onto this screen? How are parsers typically constructed? How does the internet work? What is a von Neumann computer? A third important next step is learning abstraction techniques. The first abstraction is procedures - I think staying away from objects and classes is a good idea for a while (and there are plenty of other abstractions beyond those). However, it might be that all of those can be learned "accidentally" while trying to complete various programming tasks.

    It's sad that there are not more diverse fun learn-to-program domains; there is drawing, including subvariants of turtle graphics and vector graphics similar to PostScript or Processing. There is the console interface. There are innumerable kinds of tank battles. There is Core Wars. I don't know of any fun database-backed-business domains, even though they would be really educational. There don't seem to be lots of bioinformatics or nano/biotech oriented games - there's FoldIt and Organic Builder and SpaceChem.
    Thursday, October 18th, 2012
    1:30 pm
    interfaces in game design
    In some games, prominently collectible card games like M:tG,
    and deck-building card games like Dominion,
    but also fragments of older games such as Monopoly's Chance and Community Chest,
    the player needs to read and interpret (obey)
    sentences and paragraphs printed on the cards as rules.

    This is in contrast to games such as Chess,
    which may have rules, but during play,
    the players do not generally read and interpret the rules.

    In a recent RPG such as WoW or Torchlight,
    these blocks of text are printed on the player's spells and abilities,
    as well as their equipment.
    A significant fraction of the RPG experience is 'character building',
    where the player considers these rules, and particularly their synergies,
    when they are choosing among several spells or abilities to invest in,
    and when choosing among several pieces of equipment.
    This aspect is closely analogous to deck-building in a collectible card game.

    In modeling a game with this structure,
    you may want to abstract away from the actual paragraphs printed
    (in a CCG like M:tG, the owner/operator of the game will want to
    continuously print new paragraphs),
    and focus instead on the interface
    in between the paragraphs.

    It can be difficult to figure out what the interface is.
    One test for whether something is part of the interface,
    is whether it is universal, across all decks or classes or races.
    In order for things to 'plug together', they need to be universal.

    A sword says it has "+1 str".
    Is it the case that every character has a 'str'?
    Then str is part of the interface.

    A sword says it does '+10 fire damage'.
    Is it the case that every blow has damage?
    Is it the case that every bit of damage can be categorized as one of 'physical, fire, lightning, poison'?
    That's part of the interface.

    An ability says it does '+10 fire damage to water-type monsters'.
    Is it the case that every monster can be categorized as one of Foo, Bar, Baz types? Then that's part of the interface.

    In a collectible card game, there is often a standard layout
    for every card, or a few different standard layouts for different categories.
    There might even be an explanation of the standard layout
    (this is where the cost is printed, this is where the offensive stat is
    printed, this is where the health stat is printed, et cetera).
    That standard layout is filled with universals and therefore probably
    valuable for the interface.

    Monopoly Chance and Community Chest interface, with the universals:
    Pay $ / Collect $ / Pay Each Player $ / Collect $ from each player (Every player has a stock of cash)
    Advance to Nearest (Every player has a token at some specific place)
    "Advance to nearest railroad and pay owner
    twice the rental to which they are otherwise entitled;
    if unowned you may buy it from the bank." (Every property is either unowned,
    or owned by some specific player. Every owned property has a rental cost.)
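
    A toy sketch of this plugging-together (all the names and numbers here are invented for illustration): card or item text can only reference fields that the rest of the game guarantees to exist.

```python
# Universals are the sockets that card/item text plugs into.
DAMAGE_TYPES = {"physical", "fire", "lightning", "poison"}  # assumed universal

class Character:
    def __init__(self, strength):
        self.strength = strength  # every character has 'str': part of the interface

class Sword:
    str_bonus = 1                 # "+1 str" only makes sense because str is universal
    bonus_damage = ("fire", 10)   # "+10 fire damage": damage types are universal

def attack_damage(attacker, weapon):
    base = attacker.strength + weapon.str_bonus
    dtype, amount = weapon.bonus_damage
    assert dtype in DAMAGE_TYPES  # the plug must fit the socket
    return {"physical": base, dtype: amount}

hero = Character(strength=5)
print(attack_damage(hero, Sword()))  # {'physical': 6, 'fire': 10}
```

    The point of the sketch is the assert: a new card that printed '+10 chaos damage' would fail to plug in, because 'chaos' is not one of the universals.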
    Wednesday, October 10th, 2012
    4:36 pm
    incoherent gesturing regarding utility functions and accounting
    Okay, let's assume that we understand the difference between an inferred utility function (e.g. such and such robot in fact destroys blue things therefore its utility function has a negative coefficient for 'blue things in the world'), and a utility function component in a system design (e.g. here is where the utility function is stored). We might call the latter the "utility function design pattern", and we might find examples of it in control systems where the target state, and the distance from the current state to the target state, are both actual boxes on the blueprint.

    If the system that you're building is moderately large, then you might, as a first order approximation, express the utility of the current state as an additively separable function of the states of the subcomponents.

    Utility is funny stuff, and these numbers attached to subcomponents are extra funny because they add up to an approximation. One funny thing about utility is that it doesn't have natural units, like charge. If you multiply your utility function by a constant, the new function will guide you to the same decisions. The accountants use units of cash, but a cash figure doesn't really mean what you might think it means - if the 'raw materials' account has $1m in it they don't really mean that you could acquire equivalent raw materials for $1m, nor do they mean that you could dispose of those raw materials in a hypothetical bankruptcy auction for $1m. There is a "net present value of future cash flows" interpretation of that number that sortof is justified, and I'll explain that in a bit, but for now I think it's easier to think of it as a convention: most businesses have a subcomponent of their operation which is "cash on hand", utility functions have a loose degree of freedom, and pinning that loose degree down so that the coefficient of "utils per dollar" is 1 is conventional.
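
    A tiny sketch of that loose degree of freedom (the accounts, coefficients, and options are invented): an additively separable utility over account states, where scaling every coefficient changes the numbers but never the decision.

```python
def utility(state, coefficients):
    # first-order approximation: utility is a sum of per-account contributions
    return sum(coefficients[k] * v for k, v in state.items())

coeffs = {"cash": 1.0, "raw_materials": 0.8}  # convention: 1 util per dollar
option_a = {"cash": 100, "raw_materials": 0}
option_b = {"cash": 0, "raw_materials": 130}

def best(options, coefficients):
    return max(options, key=lambda s: utility(s, coefficients))

# Multiplying the whole function by 7 is a different "unit", same choices:
scaled = {k: 7 * v for k, v in coeffs.items()}
assert best([option_a, option_b], coeffs) == best([option_a, option_b], scaled)
```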

    In double-entry accounting, there's a rule regarding conservation of utility. If an entry is purely internal to the entity doing the accounting - no interaction with the market - then utility is supposed to be conserved. This actually makes sense from a reinforcement-learning perspective. The terminal value for firms is cash. If we assume the Bellman equation for backpropagating reward coefficients is satisfied, then a purely-internal transaction just moves value / utility / reward around.
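
    A minimal sketch of that conservation rule (account names and amounts invented): an internal posting debits one account and credits another by the same amount, so the total recorded value cannot change.

```python
def post(accounts, debit, credit, amount):
    # double entry: every posting touches two accounts with equal magnitude
    accounts[debit] += amount
    accounts[credit] -= amount

accounts = {"cash": 1000.0, "raw_materials": 0.0}
total_before = sum(accounts.values())

# purely internal: convert $300 of cash into raw-materials inventory
post(accounts, debit="raw_materials", credit="cash", amount=300.0)

assert sum(accounts.values()) == total_before  # conserved: no market contact
```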

    The Bellman equation explains the "net present value of discounted cash flow" that I mentioned before - since cash flow corresponds to reinforcement learning reward for firms, if everything is working right, and we're doing discounting rather than episodic learning, then we can relate the utility of the present state through perhaps many iterations of Bellman backprop to a sequence of future cash flows. However, because of the distance of the inference, and because the accounts are just a first-order approximation, it's not always a great idea to connect the number in the account to any particular anticipated sequence of cash flows.

    If you wanted to generalize double-entry accounting to deal with solitaire scenarios (such as Minecraft), then you would need to identify what your terminal values for the scenario actually are. If you only value gold, then all of your accounts, your tools and weapons, your fortifications and fixed assets, can be denominated in gold. If you actually intrinsically value lots of things - how many verbs you have available (freedom), how many different models you see (art content), how deep or high you've dug or built or visited (achievements), and so on - then you probably should have accounts denominated in something like 'utils', rather than choosing to use gold or silver (because real-world accountants always use dollars or the local currency, and gold and silver are the best in-game approximation to currency?).

    More interestingly, even in games like SimCity or Eve that have an explicit in-game entity called "money", if you don't simply want more and more money but instead want to play in a sandboxy (yet optimized) way, then you probably shouldn't denominate your internal accounts in "money". Instead, you will have an account called "money", that holds utils.

    One of the funny things about accounts as a first-order approximation of your actual utility function is price changes. Even if the state of a subcomponent (account) stays the same, if circumstances change then the utility contributed to your total utility by that subcomponent (account) may change, perhaps wildly. One way you might explain this is that you have an internal price as well as an internal "inventory", and the price changed, even though the inventory didn't. The inventory is an abstraction (just like the utility was an abstraction), and the price is more of a coefficient than anything set by a market.

    Gah. I am too vague.
    Wednesday, September 26th, 2012
    4:37 pm
    what I did today
    There was a problem, and the microfeature that I was supposed to have done sometime last week turned out not to be working. So I investigated, and (re)figured out that the intended functionality was that a message would be received by a particular class, but that method was polymorphically overridden by another method in a subclass, which delegated to a different method on the same class, which was polymorphically overridden by a subclass, which delegated to an interface, which had a default implementation that did nothing, and two subclasses below the interface the method that looked ALMOST like it was overriding the default implementation wasn't ACTUALLY overriding it, so (understandably) the signal was not emitted and the slot was not called and the slot did not delegate to the widget that, had it been notified, would have actually done what it was supposed to do.

    Perhaps programmers ought to learn a language that has INTERCAL's COMEFROM instruction, solely in order to understand how misuse of inheritance can be a bad thing.

    If this reminds you of Heath Robinson and/or Rube Goldberg, you are correct. What is perhaps sad is that ridiculous layering and indirection and redundancy is actually how most software works - so far as I can tell, not having examined all software everywhere - admittedly, this is because to get almost anything done, a causal chain needs to traverse a huge number of ossified organizational barriers. The widget library, application, networking library, kernel, and service are probably all written by separate organizations, and it is only by looking at a particular transverse chain of events (as you might if you were debugging) that you can perceive the ridiculousness that is happening EVERY SECOND.

    However, even within one fairly small team, accidental complexity can cause those ossified barriers and ridiculous chains, if it is not assiduously cleaned up. Sigh.
    Tuesday, September 18th, 2012
    4:28 pm
    opposite of a meme
    So memes, genes, and chain letters are a kind of thing, a message of some sort that may in the right host environment provoke a chain of events that yields more copies of that thing.

    Individual humans are partly descendants of memes; I've previously argued that we should not identify solely with our genetic heritage and neglect our (chronologically more rapid, but vastly higher-bandwidth) memetic heritage.

    There doesn't seem to be a word for what aspect of a human is entirely non-memetic - meaning also non-genetic.

    All of life exists only outside of thermodynamic equilibrium - we are mostly powered by the sun, but if there were a thin layer of mucky fluid in between a slightly hotter surface and a slightly cooler surface, then life could evolve to take advantage of that heat flow. If the surfaces were the same temperature, but they were moving steadily relative to one another, then life would evolve to take advantage of that form of energy.

    There are other non-equilibrium dynamics that have some qualities of life - convection cells or vortices (basically the same thing, just the direction you're looking at it from). Storms are powered by the earth's surface being hotter than the sky, and a cell is a column of rising hot air that spreads out and falls in a circular curtain of cool air. Cells have "metabolism", and they're autocatalytic - self-creating, as well as catalyzing the formation of nearby cells. So the opposite of a meme might be a vortex, or a stand-alone metabolism.

    I think corporations have a metabolic aspect. If you take the complexity of a corporation to include the complexity of the humans working for it, they're of course more complicated than single humans. But if you take the complexity of the framework of corporate policies that would stay the same if somehow there was very high personnel turnover for a short burst, and every employee and owner swapped out for someone else, then they're complicated (among the most complicated things that we humans have ever built), but they're not impossibly complicated - comparable to a medium-large piece of software, perhaps.

    If we take Star Trek as a starting point, and imagine that spaceships correspond to businesses / metabolisms / autocatalytic storm cells, and that the people beaming in and out of the starship correspond to memes, what kind of vision of the future do you get? One difference is that beaming isn't a kind of motion - it's a kind of copying. Essentially every time someone is beamed, they're duplicated. Another aspect is that people do not really grow or change - memory is probably something that the starship as a whole has, not a meme. A message stays the same message, even if (by combining it with some other premise) you reach a conclusion which is distinct.

    A major activity of the starship might be critical thinking - accepting some visitors who beam onboard, and generally trying them out. Some of these visitors might be manipulative or deceiving - trying overtly or subtly to harm the starship as a whole. Others of these visitors might become valued members of the crew.

    This analogy meshes with Greg Egan's vision of a polis (in his novel Diaspora), but makes it a little more clear that the primary reproductive entity in this future is the starship / polis, not the memes that inhabit it - they don't really know how to autocatalyze on their own.
    Sunday, August 12th, 2012
    1:34 pm
    I made a decision. It wasn't a very important decision, it was in a game, and I made it somewhat thoughtlessly, but I want to analyze it in order to understand how to make decisions in general.

    I had a pile of recyclables - note that there's abstraction here, each item in the pile had a lot of details to it, what kind of thing it was, how damaged it was, what kind of mats I could get out of it if I chose to recycle it rather than repair it. I could have recycled everything immediately and I would have gotten some mats. The mats don't have as much detail as the recyclables, they're more fungible, but I'm still abstracting a bit to characterize them as 'a pile of mats'. Instead I decided to schlep the recyclables over to a different station, which has a bonus for recycling efficiency, and then schlep the resulting mats back to the original station.

    In doing that, I'm using a fixed asset (the robot that I was schlepping the stuff inside) and I was spending some of my own time. Was the decision worth it?

    I think it's useful to analogize double-entry bookkeeping (which is not an enormously awesome idea, just a moderately useful and moderately confusing one) to event sourcing. Event sourcing emphasizes that an abstract data type can be isomorphic to a free data type modulo an equivalence relation. The free data type is something like a log of every command. Then the equivalence relation indicates which sequences of commands end up at equivalent points - such as pushing an element followed by popping that element, which might be equivalent to doing nothing at all.

    Event sourcing allows you to change what the rules are, and figure out where you would be if you had been using those rules all along. The system - one 'everything' log, which has a (perhaps wordy) description of what each transaction really is and calls out all the accounts touched, plus account-specific logs that point (perhaps with a transaction number) back to the 'everything' log - is essentially maintaining indexes.

    Then there's a sortof squashing step that compresses each account-specific log into a concise status of that account. Then there's a utility function which takes the statuses of all the accounts and gives a utility, a number. Usually you can partially differentiate the utility function with respect to each account and get a 'price'. If you multiply the prices by the statuses then you get a number which could be called the 'value stored in that account', but it doesn't necessarily have anything to do with either the amount you spent to get that status (that's the sunk cost fallacy) nor does it necessarily have anything to do with the amount you could get on the market by liquidating the underlying goods representing that account. If we believe that the account measures something of purely instrumental value, of no intrinsic worth, simply useful to achieve other things that we actually value, then it may have something to do with future utility flows associated with that account.
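
    A sketch of that log/squash/price pipeline (every name, quantity, and coefficient below is invented for illustration): squash folds an account's view of the log into a status, utility maps statuses to a number, and a partial difference of the utility acts as a 'price'.

```python
# The free data type: a log of every command, indexed by account names.
log = [
    {"desc": "haul recyclables", "recyclables_loc1": -5, "recyclables_loc2": +5},
    {"desc": "recycle",          "recyclables_loc2": -5, "mats_loc2": +6},
]

def squash(log, account):
    # compress the account-specific view of the log into a concise status
    return sum(entry.get(account, 0) for entry in log)

def utility(statuses):
    # invented coefficients: a mat is worth 2, a recyclable at location 1 worth 1
    return 2.0 * statuses["mats_loc2"] + 1.0 * statuses["recyclables_loc1"]

statuses = {a: squash(log, a)
            for a in ("recyclables_loc1", "recyclables_loc2", "mats_loc2")}

def price(statuses, account, eps=1):
    # partial difference of utility with respect to one account: a 'price'
    bumped = dict(statuses)
    bumped[account] += eps
    return (utility(bumped) - utility(statuses)) / eps

print(price(statuses, "mats_loc2"))  # 2.0
```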

    So in this decision, there might be six accounts:

    1. recyclables in location 1

    2. mats in location 1

    3. robot

    4. time

    5. recyclables in location 2

    6. mats in location 2

    There might be transactions like:

    1. take recyclables in robot to location 2 (decrease recyclables in location 1, increase recyclables in location 2, record use of robot, record time spent)

    2. recycle (decrease recyclables in location 2, increase mats in location 2)

    3. take mats in robot to location 1 (decrease mats in location 2, increase mats in location 1, record use of robot, record time spent)

    And the alternative sequence of transactions is just the single transaction of recycling the stuff in location 1. If we stack those three transactions together, cancelling where possible, we can either derive something about the utility function (in order for it to ratify the choice) or we can praise or blame the choice (assuming the utility function is constant).

    In the first case, the value in utility of the excess mats from the more-efficient recycling would have to be equal to or greater than the value of the time spent. If the utility function is constant, then we might be able to say that the choice was a loss (for example, spending too much time for too little a return). If the utility function is constant and the choice was a good one, then we could allocate the excess utility to the robot's account. By logging the excess utility enabled by the robot, assuming that the future will be similar to the past, and discounting the future flows of utility, then we might generate a net-present-value figure for the robot that would enable us to evaluate deals that include losing the robot.
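
    The stacking-and-cancelling step can be sketched like this (the rates are invented: assume location 2's efficiency bonus yields 12 mats per pile versus 10 at location 1).

```python
def net(transactions):
    # stack transactions and cancel offsetting entries
    totals = {}
    for t in transactions:
        for account, delta in t.items():
            totals[account] = totals.get(account, 0) + delta
    return {a: d for a, d in totals.items() if d != 0}

base_rate, bonus_rate = 10, 12  # invented mats-per-pile rates

schlep = [
    {"recyclables_loc1": -1, "recyclables_loc2": +1, "time": -1},
    {"recyclables_loc2": -1, "mats_loc2": +bonus_rate},
    {"mats_loc2": -bonus_rate, "mats_loc1": +bonus_rate, "time": -1},
]
stay = [{"recyclables_loc1": -1, "mats_loc1": +base_rate}]

print(net(schlep))  # {'recyclables_loc1': -1, 'time': -2, 'mats_loc1': 12}
print(net(stay))    # {'recyclables_loc1': -1, 'mats_loc1': 10}
# Ratifying the choice means the 2 extra mats were worth the 2 units of time.
```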

    Friday, August 10th, 2012
    10:51 am
    heft and customer experience
    As you probably know, people frequently pay more for a product that they can hold, that they enjoy holding, than for a more "virtual" product. For example, Apple makes a lot of money selling physical objects such as iPads and iPhones and pays a lot of attention to the "feel" of the interaction - the swiping and so on. By contrast, products produced by Apple ecosystem developers, apps and eBooks, are remarkably difficult to make money on - some of this is, of course, Apple extracting money from the ecosystem, but there's also something about customers' purchasing behavior and what they perceive as worth paying money for.

    Another similar (in my mind) example is teleportation in massively multiplayer online games - it is a comparatively easy feature to develop or add to a game, and the existing playerbase would certainly ask for it and use it, but many games do not have teleportation. I believe this is because they believe that teleportation would damage the "feel" of the virtual world, making it more similar to experiences such as browsing the web or instant messaging, which (though compelling) are not easy to monetize. Fewer people would join or become committed to the game, and existing players would more quickly leave for other activities.

    Let's call this quality with those two examples "heft". You might prefer a tool that has a nice solid 'snick' when it opens or operates; that's an example of heft. My question is - to what extent is pursuing heft virtuous? Is it reasonable to justify pursuing heft as 'This is one of many things that people enjoy and I am providing a good customer experience.'? Or is heft a kind of cognitive bias, like hyperbolic discounting? I certainly think that taking money from someone who is cognitively disabled is reprehensible, and we are all cognitively disabled by comparison to big well-funded organizations composed of professionals who work closely with computers.

    People nowadays have relationships with software-as-service businesses, businesses composed of relatively few humans and relatively many computers (in big datacenters which may or may not be owned and operated by the customer-facing business). When I say 'relatively few humans' I mean the number of customers is so much higher than the number of employees that it would be infeasible for the customer to have even the extremely mild facial-recognition-and-slight-body-language relationship that you have with the person who usually bags your groceries. That person who bags your groceries can probably sustain that sort of extremely mild relationship with hundreds of customers, so we're talking about businesses that have something like a factor of 1000 more customers than they have employees.

    That relationship is a prototype of a 'low-heft' good. Some actions by companies might increase heft. For example, GOG (a.k.a. "Good Old Games") prices their service as if you were purchasing physical goods, and when you log in it shows an image of a shelf of boxed games. Both of those actions seem to me to be trying to increase heft, to convince you that your relationship with GOG ought to be considered analogous to a nearby shelf of objects. An author might sell a service, a service that is mostly simple text that might be delivered electronically, but in order to increase heft, they might print the text on thick paper, sign it by hand, and seal the paper with wax.

    The service of showing a particular piece of admittedly charming art has some costs - bandwidth and power and the amortized initial cost of the artists' and programmers' labor - but the profit margin on a "virtual" good can be insanely huge. In the case of Zynga or similar, big well-funded companies that callously map our cognitive biases around our sense of value and exploit them so that some people purchase these goods with insane profit margins, actions to increase heft seem to be non-virtuous. But in the case of a small author trying to create a great customer experience, actions to increase heft seem to be virtuous. How do I distinguish one case from the other? Is it not fair for something so large and smart as a corporation to use those abilities to persuade people to basically give it money?
    Saturday, August 4th, 2012
    5:17 pm
    Wednesday, August 1st, 2012
    2:53 pm
    responsibility centers
    Responsibility centers are an idea in management of corporations which might be relevant to software architecture.

    Reinforcement learning (e.g. Sutton and Barto), and particularly hierarchical reinforcement learning (e.g. Dietterich's MAXQ), might be a good lens for understanding responsibility centers from a programming perspective - and once I understand responsibility centers as a management/corporate-structure technique, they might in turn be a good lens/metaphor for understanding medium-to-large-scale software architecture.

    So imagine a business, a business in the business of being an agent - something like an Eve Online corporation consisting of only one capsuleer. There are a lot of actions that are available, things that might be advantageous to do, including buying things, selling things, moving from place to place. If you hooked up a simple, flat reinforcement learning algorithm to this agent, using profit as a reward signal, then it would learn, but it would learn slowly.

    You could instead separate the overall task into subtasks such as procuring, freighting, sales, and strategy. Each subtask has responsibility for making a fraction of the choices that the original flat business/agent had. Each subtask will also need a 'scorecard' or modified reward signal that indicates how well it is doing at its task. Ideally, since each subtask now has a much easier problem to learn, it will learn rapidly.

    Furthermore, you can often arrange for the hierarchy to have some sharing, maybe a lot of sharing. In Dietterich's taxi example, going to pick up a passenger is a 'go' task that is shared with going to drop off a passenger. Sharing is good for two reasons: One reason is that the shared component gets more experience - so it moves up the learning curve faster. Another reason is that all of the supertasks improve when the shared component improves, even if they weren't even running at the time.
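
    A toy model of that sharing effect (the learning curve here is invented, not Dietterich's): a subtask shared between two supertasks accumulates experience from both, so it moves down its error curve twice as fast as an unshared one would.

```python
class Subtask:
    def __init__(self, name):
        self.name, self.experience = name, 0

    def run(self):
        self.experience += 1  # every invocation is a learning trial

    def error(self):
        return 1.0 / (1 + self.experience)  # toy learning curve

go = Subtask("go")                  # shared, as in the taxi example
pickup = [go, Subtask("grab")]      # supertask = sequence of subtasks
dropoff = [go, Subtask("release")]  # reuses the very same 'go'

for _ in range(10):
    for task in pickup + dropoff:
        task.run()

# 'go' ran inside both supertasks, so it has twice the experience (20 vs 10),
# and both supertasks benefit from its lower error.
assert go.experience == 20
```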

    Software engineers often emphasize refactoring to reduce repetition. There is some advantage to reducing size, but even when it technically increases the lines of code, software engineers routinely advocate "reducing repetition". I think one good explanation is that we are not simply reducing repetition, but introducing sharing. The learning curve or reliability growth curve of a software component is a sequence of bug fixes and optimizations, and the software as a whole will improve faster if there is more sharing.

    (I'm not sure if I've seen this kind of justification of the economic benefit of division of labor before - yes, its sortof an increasing returns to scale effect, but it's not like a steel tank where the cost is proportional to area and the benefit is proportional to volume.)

    There's a taxonomy (which is probably taught because it is neat, even though the real world is scruffy) of so-called "responsibility centers", dividing them into cost centers, revenue centers, and profit centers. The cost centers are for subcomponents of the organization where you've given the manager the ability to control their costs, but not really the ability to control their revenues; their scorecard or reward signal shows how little they spent. Similarly, revenue centers are subcomponents where the manager can control revenue but not really costs, and profit centers are for subcomponents where the manager can control both.

    In software architecture, it is routine to allocate responsibilities, but it's not as routine (or not as emphasized) that you also need to explain what "good" is for each component. Some components will be more or less performance-critical, and performance may be in different terms - latency or memory usage, for example.

    I'm not sure to what extent real-world businesses can be described as using an object-oriented internal architecture. I imagine a listener department that, when a client comes in, spins up a whole client-specific division to deal with that client's requests - which is not that unrealistic - a construction company perhaps?
    Monday, July 23rd, 2012
    4:08 pm
    Change emphasis on management of software engineers
    (This is some observations about my current workplace; it's probably not correct about management of software engineers in general.)

    First, the value of having a dedicated, long-term software engineering team is bursty. Some specific items - fast turnaround of an apparently tricky diagnosis problem or user experience change - can drop from a month or two of schedule to 15 minutes. Though you presumably have some tasks for your team that they're continuously working on, the vast majority of the value of having the team comes from those bursts. If the value of having the team came primarily from their steady effort, then you would never interrupt them with a request or query - you would instead use the usual software change pipeline.

    Admitting that their value is bursty changes your attitude toward training. It might be fine for them to spend one or two days a WEEK on training if they are available for queries and urgent tasks during the training, and if the training makes them more able to execute those bursty maneuvers that generate so much value for your organization. It might be more appropriate to treat them like a team of athletes who train for months and then succeed at climbing a boulder or a summit than like bricklayers who make progress for months and then finish.

    Another way this focus on bursty value changes your attitude is regarding whether to accept "I dunno" and a proffered workaround as a response to bugs. A bug is evidence of a hole in your engineering team's understanding of the entire system. If the primary value of your software engineers is their understanding of the entire system, then keeping them working to really find a root cause is part of their core function. If the value of your software engineers were the lines of code that they turn out, then bug diagnosis would be taking valuable time away from writing lines of code.

    Second, software is intellectual property, but it's intellectual property like land or buildings. Most corporations (not real estate or development ones) need access to land or buildings to do what they're actually trying to do. They might own their land or buildings, but sometimes they lease it, and regardless they probably have finance experts periodically looking at their holdings and considering the question of "are we holding too much property?". According to my cut-and-paste duplication detector, and a very conservative estimate of $1/loc/year maintenance across a 10-year lifetime, there's $40k lying on the ground in the form of unnecessary duplication that might take an intern two weeks to clean up (and we need educational tasks like that to give to interns anyway).
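
    The arithmetic behind the $40k, spelled out (the duplicated line count is my inference from the stated rates, not a figure from the detector itself):

```python
# At $1 per line of code per year over a 10-year lifetime, $40k of waste
# corresponds to about 4,000 duplicated lines.
maintenance_per_loc_per_year = 1  # dollars, conservative estimate
lifetime_years = 10
waste_dollars = 40_000

duplicated_loc = waste_dollars / (maintenance_per_loc_per_year * lifetime_years)
print(duplicated_loc)  # 4000.0
```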

    I don't know what it is - is the idea of reading code so off-putting that finance people (who presumably wade through very dry stuff in their usual work) can't imagine skimming it to see whether there's gruesomely obvious repetitious stretches that should be chopped out? Or is it the sunk cost fallacy, that the organization has spent an enormous amount of money specifying, developing, and testing this code, so it must be valuable and we want to hold on to it, not get rid of it?