Reinforcement learning (e.g. Sutton and Barto), and particularly hierarchical reinforcement learning (e.g. Dietterich's MAXQ), might be a good lens for understanding responsibility centers from a programming perspective - and once I understand responsibility centers as a management/corporate-structure technique, they in turn might be a good lens/metaphor for understanding medium-to-large-scale software architecture.
So imagine a business, a business in the business of being an agent - something like an Eve Online corporation consisting of only one capsuleer. There are a lot of available actions, things that might be advantageous to do: buying things, selling things, moving from place to place. If you hooked up a simple, flat reinforcement learning algorithm to this agent, using profit as a reward signal, then it would learn, but it would learn slowly.
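To make the flat version concrete, here is a minimal sketch: one tabular Q-learner over the whole action space, with profit as the only reward. The two-station "market" below is entirely made up for illustration - goods are cheap at station A and dear at station B.

```python
import random

random.seed(0)  # for reproducibility of this toy run

PRICES = {"A": 10, "B": 25}        # buy/sell price at each station
ACTIONS = ["buy", "sell", "move"]  # every choice sits in one flat policy

def step(state, action):
    """Return (next_state, reward) for the toy trading MDP."""
    loc, cargo = state
    if action == "move":
        return ((("B" if loc == "A" else "A"), cargo), -1)  # travel cost
    if action == "buy" and cargo == 0:
        return ((loc, 1), -PRICES[loc])                     # pay to load up
    if action == "sell" and cargo == 1:
        return ((loc, 0), PRICES[loc])                      # profit on unload
    return (state, 0)                                       # invalid action: no-op

def train(episodes=2000, alpha=0.2, gamma=0.95, eps=0.1):
    """Plain epsilon-greedy Q-learning, one table for everything."""
    Q = {}
    for _ in range(episodes):
        state = ("A", 0)
        for _ in range(20):  # short episodes
            qs = [Q.get((state, a), 0.0) for a in ACTIONS]
            a = (random.choice(ACTIONS) if random.random() < eps
                 else ACTIONS[qs.index(max(qs))])
            nxt, r = step(state, a)
            best_next = max(Q.get((nxt, b), 0.0) for b in ACTIONS)
            Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((state, a), 0.0))
            state = nxt
    return Q

Q = train()
```

Even this tiny state space takes thousands of episodes to settle into "buy at A, move, sell at B" - every subtask's experience is smeared across one undifferentiated table.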
You could instead separate the overall task into subtasks such as procuring, freighting, sales, and strategy. Each subtask has responsibility for making a fraction of the choices that the original flat business/agent had. Each subtask will also need a 'scorecard', or modified reward signal, that indicates how well it is doing at its task. Ideally, since each subtask is much easier to learn than the original, each will learn rapidly.
Furthermore, you can often arrange for the hierarchy to have some sharing, maybe a lot of sharing. In Dietterich's taxi example, going to pick up a passenger is a 'go' task that is shared with going to drop off a passenger. Sharing is good for two reasons: One reason is that the shared component gets more experience - so it moves up the learning curve faster. Another reason is that all of the supertasks improve when the shared component improves, even if they weren't running at the time.
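Here is a sketch of that sharing, with a toy 1-D corridor standing in for Dietterich's taxi grid (my simplification, not his setup). Both parent tasks call the same 'go' subtask, which carries its own scorecard (-1 per step, 0 on arrival) - so every navigation step, whether for pickup or dropoff, trains the same shared table.

```python
import random

random.seed(0)

N = 5                  # corridor cells 0..4
GO_ACTIONS = [-1, +1]  # move left / move right
Qgo = {}               # ONE Q-table, shared by both parent tasks

def go(pos, target, alpha=0.5, eps=0.1):
    """Navigate to target; pseudo-reward (scorecard) is -1 per step."""
    while pos != target:
        s = (pos, target)
        qs = [Qgo.get((s, a), 0.0) for a in GO_ACTIONS]
        a = (random.choice(GO_ACTIONS) if random.random() < eps
             else GO_ACTIONS[qs.index(max(qs))])
        nxt = min(N - 1, max(0, pos + a))      # clamp at corridor ends
        best = (0.0 if nxt == target else
                max(Qgo.get(((nxt, target), b), 0.0) for b in GO_ACTIONS))
        Qgo[(s, a)] = Qgo.get((s, a), 0.0) + alpha * (
            -1 + best - Qgo.get((s, a), 0.0))
        pos = nxt
    return pos

def pickup(pos, passenger):      # parent task 1: reuses the shared subtask
    return go(pos, passenger)

def dropoff(pos, destination):   # parent task 2: same shared subtask
    return go(pos, destination)

# Both parents feed experience into the one Qgo table.
for _ in range(60):
    for start in range(N):
        for p in range(N):
            pos = pickup(start, p)
            pos = dropoff(pos, (p + 2) % N)
```

When a dropoff run improves the navigation table, every future pickup benefits too, even though pickup "wasn't running at the time" - that's the second advantage of sharing.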
Software engineers often emphasize refactoring to reduce repetition. There is some advantage to reducing size, but software engineers routinely advocate "reducing repetition" even when the refactoring technically increases the line count. I think one good explanation is that we are not simply reducing repetition, but introducing sharing. The learning curve or reliability growth curve of a software component is a sequence of bug fixes and optimizations, and the software as a whole will improve faster if there is more sharing.
(I'm not sure if I've seen this kind of justification of the economic benefit of division of labor before - yes, it's sort of an increasing-returns-to-scale effect, but it's not like a steel tank where the cost is proportional to area and the benefit is proportional to volume.)
There's a taxonomy (which is probably taught because it is neat, even though the real world is scruffy) of so-called "responsibility centers", dividing them into cost centers, revenue centers, and profit centers. Cost centers are subcomponents of the organization where you've given the manager the ability to control costs, but not really the ability to control revenues; their scorecard or reward signal shows how little they spent. Similarly, revenue centers are subcomponents where the manager can control revenue but not really costs, and profit centers are subcomponents where the manager can control both.
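In RL terms, the three kinds of center are just three different reward signals computed over the same underlying report. A hypothetical sketch (the names and numbers are mine, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Period:
    """One accounting period's raw figures for a subcomponent."""
    revenue: float
    cost: float

def cost_center_score(p: Period) -> float:
    return -p.cost             # judged only on spending less

def revenue_center_score(p: Period) -> float:
    return p.revenue           # judged only on bringing revenue in

def profit_center_score(p: Period) -> float:
    return p.revenue - p.cost  # judged on both at once

report = Period(revenue=120.0, cost=80.0)
print(cost_center_score(report),     # -80.0
      revenue_center_score(report),  # 120.0
      profit_center_score(report))   # 40.0
```

Same facts, three scorecards - the taxonomy is really a choice about which parts of the reward signal a given manager is accountable for.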
In software architecture, it is routine to allocate responsibilities, but it's not as routine (or not as emphasized) that you also need to explain what "good" is for each component. Some components will be more or less performance-critical, and performance may be measured in different terms - latency or memory usage, for example.
I'm not sure to what extent real-world businesses can be described as using an object-oriented internal architecture. I imagine a listener department that, when a client comes in, spins up a whole client-specific division to deal with that client's requests - which is not that unrealistic - a construction company perhaps?
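A toy rendering of that listener-department picture - one long-lived dispatcher that spins up a client-specific object on first contact. All the names here are made up for illustration:

```python
class ClientDivision:
    """Spun up per client; holds all client-specific state and history."""
    def __init__(self, client_id):
        self.client_id = client_id
        self.requests = []

    def handle(self, request):
        self.requests.append(request)
        return f"{self.client_id}: handled {request}"

class ListenerDepartment:
    """Long-lived; its only job is routing each client to its division."""
    def __init__(self):
        self.divisions = {}

    def receive(self, client_id, request):
        if client_id not in self.divisions:   # first contact: spin up a division
            self.divisions[client_id] = ClientDivision(client_id)
        return self.divisions[client_id].handle(request)

front_desk = ListenerDepartment()
front_desk.receive("acme", "pour foundation")
front_desk.receive("acme", "frame walls")
front_desk.receive("initech", "survey site")
```

The construction-company reading is that each division is a project team: created on demand, accumulating one client's history, and torn down (garbage-collected) when the engagement ends.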