Jekyll2019-02-13T00:36:00+00:00/feed.xmlDesearch and RevelopmentExperiments in abstract thought and the theory of software engineering.Cartographic Software Engineering2018-11-21T19:54:00+00:002018-11-21T19:54:00+00:00/software/2018/11/21/cartographic-software-engineering<blockquote>
<p>“The utility of geography in matters of small concern, also, is quite evident; for instance, in hunting. A hunter will be more successful in the chase if he knows the character and extent of the forest; and after, only one who knows a region can advantageously pitch camp there, or set an ambush, or direct a march. The utility of geography is more conspicuous, however, in great undertakings, in proportion as the prizes of knowledge and the disasters that result from ignorance are greater.”</p>
</blockquote>
<p>Strabo, <em>“Geographica”</em></p>
<h3 id="introduction">Introduction</h3>
<p>Imagine, for once, that we chose to treat the software we produced not as a mental representation actualized, a blueprint constructed, or a sculpture refactored to perfection.
Suppose, rather, that we chose to treat it as an <a href="https://en.wikipedia.org/wiki/Ecumene">ecumene</a> - a habitable world, a territory organized atop a wilder underlying landscape.</p>
<p><img src="/assets/images/cartographic-software-engineering.jpg" width="450" style="float: right; padding: 1em" /></p>
<p>Our software, then, would be the small village that arose first by laboriously clearing and driving back the wild forest.
Whose buildings and fortifications were constructed - with varying levels of mastery - from the ready materials of that environment and whatever tools and techniques were brought along by its engineers, artisans, and craftsmen.
A small piece of organized space in a wild and disorganized environment.</p>
<p>The terrain - all those strata we build on top of: hardware, operating systems, networks, languages, libraries, cloud platforms, and web APIs - would be subject to phenomena at various time scales: weather, seasons, wars, socio-political phase changes, and plate tectonics.</p>
<p>The experience provided by construction itself would lead to improvements in tools and techniques as well as the creation of novel tools and techniques.
A local culture would arise: cuisine, language, style, routines.
Such local culture makes every mature codebase recognizable as <a href="https://en.wikipedia.org/wiki/Sui_generis">sui generis</a>.</p>
<p>How would our methods change if we thought of our software this way?
We would think of our services as villages, towns, cities, and countries.
We would think of our dependencies as geological strata, environments, and local ecologies.
The impact of such a change in thinking is not immediately obvious to those of us schooled in modern methods of software engineering.</p>
<p>I posit that taking this view seriously would mean taking seriously the idea of a software geography, and especially taking seriously that art by which geographers most immediately master and navigate a territory - cartography.
Let us then sketch the first outlines of a <em>software cartography</em>.</p>
<h1 id="viewpoint-construction-as-primary-art">Viewpoint Construction as Primary Art</h1>
<p>It is an underappreciated fact that software systems are incapable of singular abstract representation.
No single image or document could ever fully capture a piece of software.
Instead, every piece of software must be represented at multiple levels of abstraction and points-of-view simultaneously - machine level, programming language level, UI level, system diagram level, executive summary level, business strategy level.
For this very reason, the <a href="https://en.wikipedia.org/wiki/ISO/IEC_42010">IEEE 42010 standard</a> chooses to emphasize the use of views and viewpoints over any single technique for describing a piece of software.
This recognizes that the adequate description of a single software component may require a <a href="https://en.wikipedia.org/wiki/System_context_diagram">system diagram view</a>, a <a href="https://en.wikipedia.org/wiki/Class_diagram">class diagram view</a>, a <a href="https://en.wikipedia.org/wiki/Finite-state_machine">state-machine view</a>, and a logic specification view (of which I could find no good Googleable examples, but you can learn more from <a href="https://www.amazon.com/PSP-Self-Improvement-Process-Software-Engineers/dp/0321305493/ref=sr_1_2?ie=UTF8&qid=1518669892&sr=8-2">this book</a> <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>) - none of which is adequate by itself to describe the complete component.</p>
<p><img src="/assets/images/ancient-skies.png" width="350" style="float:left; padding: 1em" /></p>
<p>Much like a map, which can be constructed at the level of a continent, a country, a province, or a building, a software system view can be constructed at varying levels of abstraction - and as the level of abstraction rises, the level of detail necessarily falls.</p>
<p>This leads naturally to the idea that different levels of abstraction will be appropriate for different activities.
One cannot guide an entire department using a class diagram; similarly, one cannot build a class diagram from a mission and vision statement.
Yet both levels of detail are related and necessary for the harmonious operation of the whole and the achievement of the goal.</p>
<p>So far, we have discussed viewpoints as if the view were what was to be captured - but what about the viewers themselves?
So long as software has not become completely autonomous, human symbiosis and interaction is still required.
Software is made to be used and administered.
Often by several different classes of users and administrators.
Therefore, the roles of these users themselves may be worth capturing, especially insofar as these roles will need to be on-boarded or eventually automated.
People too are components of any real system.</p>
<p>Seen this way, viewpoint construction is, in the broadest sense, a primary art in the creation of software systems.
Therefore, any proper <em>software cartography</em> must take it as its starting point.
From this starting point we can disentangle three primary arts of viewpoint construction.</p>
<p>Firstly, if we are to clear the forest and build our village, we must learn how to fashion maps.
These maps should define the area over which our campaign is to be waged - the scope and extent of the work - and aid us in finding an advantageous location upon which to carry out that work.</p>
<p>Secondly, if we are to organize people to do the work, we should learn to plan and to fashion roles for them.
This means learning to identify and organize related activities into coherent roles.
Each of us may play many or even all roles, but these roles should be disentangled, described, and their duties captured.</p>
<p>Finally, we should use this wealth of captured information to achieve our own industrial revolution through automation.
Outsized leverage can be achieved only through the automation of roles.
Once a repetitive activity or group of activities is recognized and appropriately captured, partial or full automation (where possible) is a natural next step.</p>
<p>We will call these three primary arts Cartography, Biography, and Automation.</p>
<h1 id="cartography">Cartography</h1>
<p>Any intellectual activity will experience a level up when an appropriate diagrammatic representation is discovered <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>. For example, <a href="https://en.wikipedia.org/wiki/Feynman_diagram">Feynman diagrams</a> replace rather large multi-variate integrals with a more compact and convenient visual representation which is more amenable to experimentation and dissemination.</p>
<p>Software is no less amenable to pictographic capture. The basic building blocks are the <a href="https://en.wikipedia.org/wiki/Flowchart">Flowchart</a>, <a href="https://en.wikipedia.org/wiki/Class_diagram">Class Diagram</a>, <a href="https://en.wikipedia.org/wiki/Sequence_diagram">Sequence Diagram</a>, <a href="https://en.wikipedia.org/wiki/UML_state_machine">State Diagram</a>, and the other <a href="https://en.wikipedia.org/wiki/Unified_Modeling_Language">UML</a> basics <sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup>. A budding <strong>software cartographer</strong> should seek to master the widest array of diagramming tools possible, including the more obscure variants like the <a href="https://en.wikipedia.org/wiki/System_context_diagram">System Context Diagram</a>, the <a href="https://en.wikipedia.org/wiki/Data_flow_diagram">Data Flow Diagram</a>, and the <a href="https://en.wikipedia.org/wiki/Problem_frames_approach">Problem Frame</a>. Each of these is a tool in its own right, and one should learn when and for what each is suited - a training that only experience can provide.</p>
<p>Armed with these diagrammatic tools, a <strong>software cartographer</strong> is able to raise the level of abstraction at which they operate away from code and towards higher-level, more speculative, and larger-scale abstractions. These are the brushes with which you will paint your works. Much like a paintbrush, the value of what they produce depends largely on the skill, experience, and natural talents of the person wielding them.</p>
<p>The basic canvas upon which the <strong>software cartographer</strong> paints is the document. The document provides the unifying whole in which their work hangs together. It should be organized to give it a flow and a rhythm - the elements of style and grammar apply here no less than in creative writing. No small effort must be expended in learning to write documents, and to write them well, as these are the vehicle by which you share and realize your higher-level works.</p>
<p>To summarize: The two cornerstones of cartography are the diagram and the document. One should learn to master both and their applications in conveying the designs of systems.</p>
<h1 id="biography">Biography</h1>
<p>Geography itself is divided into sub-disciplines: <a href="https://en.wikipedia.org/wiki/Physical_geography">Physical geography</a> and <a href="https://en.wikipedia.org/wiki/Political_geography">Political geography</a>.
What we call “cartography” above maps most closely to the former; what we here call “biography” maps most closely to the latter.
When we set out to describe a system we must make sure not to forget the people that inhabit and operate that system.</p>
<p>Surprisingly few systems are designed with human-machine symbiosis as an explicit goal - yet this is implicitly how they are expected to operate. Humans can and should be considered as components in systems - analysts, investigators, administrators, and account managers are all components in the total business system. This symbiosis is usually sanitized into the user/system dualism, but could be made much richer by thinking of the human “users” as explicit components. The means by which human components convert inputs into outputs may defy simple modeling, but in many cases the inputs and outputs themselves probably do not.</p>
<p>This is the art of biography - learning about and modeling users as explicit components in our systems rather than as sources or sinks at the edges of our systems. We can leverage our cartographic diagramming tools for much of this. Whenever we consider adding a component we should ask if it would not function better or more simply as a person. Often we can take complex components we understand poorly and make them human components until we can gain the experience to understand them better, allowing for another iterative round of refinement.</p>
<p>The art of biography involves collaborating with the human components of your system (whether they are explicitly thought of this way or not) to better understand their jobs. This might involve cataloguing and categorizing their various activities to better understand their structure.</p>
<p>To summarize: Biography is the art of thinking about humans as components rather than sources or sinks at the edge of our systems, and learning to better integrate them into the total functioning of the whole.</p>
<h1 id="automation">Automation</h1>
<p>Armed with mature physical and political descriptions of our system, we can now begin the next phase of abstraction - launching our own industrial revolution through automation. Our system is a set of components we understand and can model cartographically, and the people operating within that system are similarly modeled and understood through biography. Combining the two, we can look for opportunities to turn repetitive work by people into software components - or poorly functioning software components into humans who perhaps function better.</p>
<p>Automation is the pinnacle of cartographic software engineering and its aim.</p>
<p>While there is still much to say here, the outline has been sketched. Now all that remains is its elaboration in practice.</p>
<hr />
<div class="footnotes">
<ol>
<li id="fn:1">
<p><a href="https://en.wikipedia.org/wiki/Watts_Humphrey">Watts Humphrey</a> is an as yet underappreciated luminary in the field of software engineering. More important even than the particulars of his ideas is the attitude which underlies them. Namely, his relentless pursuit of self-improvement and belief that software projects are rationally manageable in ways that lend themselves to continuous improvement in all important areas - productivity, prediction accuracy, quality, reliability. Furthermore, Humphrey believes such “rational management” can lend consistency and quality to the works of even less capable individuals while giving ultra-competent individuals the ability to thrive at new peaks of performance. Humphrey views the exasperated rejection of method and over-reliance on self-organization characteristic of Scrum, Agile, TDD et al. as surrender in the face of the difficult task of organizing and planning software development - more political than pragmatic. The <a href="https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=5259">data he collected</a> on his Team Software Process (TSP) bears this out. Software projects can be managed with the appropriate discipline and techniques, producing astounding results. Similarly for Humphrey, waterfall methods fall victim to a lack of delegation to competent and appropriately empowered teams actually capable of organizing, planning, and performing the work. They also fail to manage risk by ignoring the iterative unfolding of any complex system. What matters is technique and the individuals operating under those techniques. Recent attempts to reconcile TSP and Agile practices have been misguided - the two are fundamentally politically opposed. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>Wikipedia contains <a href="https://commons.wikimedia.org/wiki/Specific_diagram_types">a large repository of diagrams</a> but solemnly notes “there is no general accepted classification of diagrams”. A fascinating research problem. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>UML itself has fallen somewhat out of fashion and even traumatized some individuals with the inflated claims and overzealous totalizing of some of its early practitioners. See <a href="https://queue.acm.org/detail.cfm?id=984495">“Death By UML Fever”</a> for an idea of what happened here. It provides a cautionary tale about being overly prescriptive or enthusiastic with regards to “one method to rule them all” in software engineering. <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Notes On Bitcoin2015-03-29T11:29:00+00:002015-03-29T11:29:00+00:00/cryptocurrency/2015/03/29/notes-on-bitcoin<p>The following is a set of notes made while reading the <a href="https://bitcoin.org/bitcoin.pdf">Bitcoin Whitepaper</a>.
My goal was to summarize the paper at a higher level of abstraction in order to make Bitcoin more accessible for philosophical discussions.</p>
<h3 id="coins">Coins</h3>
<p>Coins are the various solutions to the same difficult, and thermodynamically
costly, problem. The parameters of the problem can be modified to increase or
decrease its difficulty. The difficulty is amplified over time to keep pace
with gains from Moore’s law. In this way proof-of-work solutions serve the same
purpose as collectibles in Nick Szabo’s origins of money article <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>.</p>
<p><strong>SIDENOTE:</strong> The problem mentioned above is specifically that of starting from
zero and incrementing by 1 each time, finding an integer value that - when
hashed along with the block’s transactions and the hash from the previous
block - produces a value that begins with a required number of zero bits. The
required number of zero bits is increased to amplify difficulty.</p>
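<p>A toy sketch of that search (heavily simplified: real Bitcoin double-SHA-256-hashes a structured 80-byte block header rather than concatenated raw bytes, but the count-up-from-zero structure is the same):</p>

```python
import hashlib

def mine(prev_hash: bytes, transactions: bytes, difficulty_bits: int) -> int:
    """Count up from zero until SHA-256(prev_hash + transactions + nonce)
    begins with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)  # digests below this start with enough zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(
            prev_hash + transactions + nonce.to_bytes(8, "big")
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1
```

<p>Raising <code>difficulty_bits</code> by one doubles the expected number of attempts, which is the knob by which difficulty is amplified to keep pace with hardware gains.</p>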
<p>Once a coin is mined it becomes, quite literally, its history of ownership. It
is an ordered chain stretching from the genesis of the coin in a miner (or pool
of miners) to its current owner. Each change of ownership is recorded as part
of the coin itself.</p>
<h3 id="temporality-and-the-blockchain">Temporality And The Blockchain</h3>
<p>In theory, the exact calendar dates and clock times of transactions in bitcoin
are ultimately irrelevant to its proper functioning. Though in practice part of
the validation of a block, as explained by Vitalik Buterin in the
<a href="https://github.com/ethereum/wiki/wiki/White-Paper">Ethereum White Paper</a>, is
ensuring the block has a sensible timestamp, neither earlier than the previous
block’s nor too far in the future. The temporality, as mentioned by
<a href="http://thenewcentre.org/seminars/bitcoin-philosophy/">Nick Land in the 1st lecture</a>,
is a constructed tensed or A-series temporality defined by the blockchain.</p>
<p>The past is constructed as a series of blocks. Each block freezes all the
transactions that a given mining node observed before attempting to freeze the
block. A mining node is entitled to freeze and share a block when it has found
a solution to the thermodynamically costly problem discussed above.</p>
<p>There are orphaned blocks and block chain forks that can occur and further
complicate this process. There can be either temporary or permanent competing
temporal records, but the longest fork to date was resolved back into a single
blockchain in a mere 4 blocks <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>. The ontological status of these abandoned
temporal records may or may not be interesting to investigate.</p>
<p>Ultimately, the temporality constructed by bitcoin is not the temporality of an
added spatial dimension. There is only a single dimension, denoted by block
number. This growing block-universe <sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup> is an ever-expanding transactional
history.</p>
<p>To reduce the burden of this accumulated past Merkle trees are used to save a
“summary” (hash) of what was previously seen, allowing the rest to be safely
discarded or only accessed in depth as needed.</p>
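<p>A toy sketch of the idea (simplified: real Bitcoin hashes transactions in a precise binary serialization, but the pairwise collapsing is the same): hash the transactions, then fold the hashes together level by level until a single root summarizes them all.</p>

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses throughout."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(transactions: list) -> bytes:
    """Collapse transaction hashes pairwise into a single 32-byte summary.
    Bitcoin duplicates the last hash when a level has an odd count."""
    level = [sha256d(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

<p>A node holding only the 32-byte root can verify a single transaction’s membership from a short path of sibling hashes, which is what allows the bulk of the history to be discarded.</p>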
<h3 id="relationship-between-coins-and-blocks">Relationship Between Coins And Blocks</h3>
<p>The first transaction in a block is reserved for any new bitcoins to be awarded
for solving and creating that block. Thus, bitcoins come into circulation and
are assigned to the miner whose proof-of-work of the latest block is accepted
into the blockchain first. This relationship between coins and blocks allows
the currency to be bootstrapped into existence - the first coins are mined
along with the first blocks.</p>
<h3 id="overpower-the-network">Overpower The Network</h3>
<p>It’s almost hilariously impractical and nonsensical to do such a thing. You
gain little to no economic power; the only real gain is the ability to deny
service to others and potentially shut the whole network down <sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup>.</p>
<h3 id="addendum">Addendum</h3>
<p>To summarize Nick Land’s description:</p>
<p>Bitcoin can be viewed, in many ways, as the enactment of transcendental
philosophical critique instantiated within software. Trusted third parties take
the place of the transcendental entity that is to be subtracted from the
monetary process via the critique. Buried beneath the technical details of the
paper is an entire framework for staging a critique of these institutions. Much
of this framework is implicit within the assumptions taken for granted.
Through this lens, Bitcoin is a philosophically interesting entity worthy of
further investigation and consideration - especially as it relates to
capitalism, modernity, and the (neo-)liberal project.</p>
<hr />
<div class="footnotes">
<ol>
<li id="fn:1">
<p><a href="http://szabo.best.vwh.net/shell.html">Shelling Out</a>. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p><a href="http://bitcoin.stackexchange.com/questions/3343/what-is-the-longest-blockchain-fork-that-has-been-orphaned-to-date">What Is The Longest Blockchain Fork That Has Been Orphaned To Date</a> <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p><a href="http://en.wikipedia.org/wiki/Growing_block_universe">Growing Block Universe</a> <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p><a href="http://bitcoin.stackexchange.com/questions/658/what-can-an-attacker-with-51-of-hash-power-do">What Can An Attacker With 51% Of Hash Power Do</a>. <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>A Brief Introduction to Deleuze’s Nietzsche2015-02-21T11:29:00+00:002015-02-21T11:29:00+00:00/philosophy/2015/02/21/a-brief-introduction-to-deleuzes-nietzsche<p>Human thought is immensely capable and also immensely error prone. Much of the
task of philosophy has been dedicated to rooting out these errors. No philosophy
has perhaps been more misunderstood and mystified, more prone to error, in this
pursuit than that of
<a href="http://en.wikipedia.org/wiki/Friedrich_Nietzsche">Friedrich Nietzsche</a>.
Nietzsche’s primary project can be viewed as the uprooting of a single
pernicious error and the tracing of the consequences of its correction. This
error is most easily summarized as the preference for being over becoming. By
nature this correction accuses any preference for relations of equalization or
“zero sum” equilibrium of committing a fundamental, all-too-human error. Let’s
unpack these ideas further.</p>
<p>A vast array of Western metaphysicians have accepted the reduction of reality to
what is - to being. This view of reality is, however, immediately troubled when
the need arises to explain the passage of the present moment. If reality is
being then why is this being constantly changing? To resolve this problem, what
one might call the problem of genesis, the idea of negation and contradiction
between co-existing beings has typically been taken up as the motor of passage.
That gun turret over there is, until the missile explodes, then it is not. It
has been negated or contradicted by the missile. If we could examine the missile
itself we should find it explodes as a consequence of the resolving of its own
internal contradictions. This is the Hegelian dialectic briefly summarized, from
which one can derive all of first-order logic and mechanical physics, and for
Nietzsche it is highly suspect for a variety of reasons. Suffice it to say that
the idea of being confuses what comes last (what is) with what comes first
(becoming). It thereby opens the door for all sorts of erroneous teleological
principles (God, The Big Bang, The Big Freeze) to enter.</p>
<p>Nietzsche would like to substitute the notion of becoming in the place of being.
He recognizes Spinoza as perhaps the only other philosopher to systematically
approach this problem. We must ask along with Nietzsche, if being and negation
cannot explain reality then what can we substitute in its place? The short
answer is difference and repetition or becoming and eternal return to use
Nietzsche’s own vocabulary. Difference is the substance of reality, repetition
is its motor. How are we to understand this?</p>
<p>Firstly, we will address difference. The fundamental nature of reality is
difference. Every speck in the universe down to the subatomic particles and,
perhaps, below is different from every other. Any being (identity) we find there
is imposed after the fact by an abstraction from the particulars
(representation). Thus the nature of reality is difference, but how does
difference reconfigure itself? How does it solve the problem of genesis or
passage?</p>
<p>The answer is, of course, by repetition. Everything recurs. Differences are
ultimately relations of dominant and dominated between forces of various
magnitudes. They are differential relationships. Each force, however, is driven
to conquer those forces that dominate it and to dominate other forces. Think
force in the same sense that electromagnetism and gravity are forces. Or, more
mechanistically, this very inequality drives the evolution of new differences in
a game without beginning or end. To get a concrete sense for this we have only
to observe the differences in
<a href="http://en.wikipedia.org/wiki/Intensive_and_extensive_properties">intensive quantities</a>
in physical systems (temperature differences, pressure differences, electrical
potential differences). These differences power all the physical systems we know
of. This is, very visibly, the game of dominant and dominated forces cycling
endlessly around us.</p>
<p>The project Deleuze identifies in Nietzsche, he carefully attempts to carry out
in <em>Difference and Repetition</em>: to replace the erroneous “image of thought”
associated with being with the “thought without image” associated with becoming.
The transformation of the image of thought, on a personal and societal level, is
extremely difficult. Thinking in terms of difference and repetition uproots the
forms of the thought dominant throughout the Western tradition. It therefore
leaves no easily accessible historical basis on which to constitute itself.
Humanity is left with two options - lazily circling in the comfortable but
erroneous form of traditional thought, or embracing the immense unknown of an
entirely new form of thinking.</p>
<p>More and more our species is being forced by circumstance to embrace this
Nietzschean way of thinking, though it may never, ultimately, be attributed to
him. This vertiginous change is producing large-scale resistance and confusion
in populations everywhere. To make sense of it requires tremendous fortitude in
thought, and continual effort not to succumb to the allure of traditional
thinking. Possible next directions for one interested in these new forms of
thought:</p>
<p><strong>Articles</strong></p>
<ul>
<li><a href="http://www.capgemini.com/resource-file-access/resource/pdf/Digital_Transformation_Review__No_1__July_2011.pdf">The Digital as Bearer of Another Society</a></li>
<li><a href="http://www.parrhesiajournal.org/parrhesia07/parrhesia07_simondon2.pdf">Technical Mentality</a></li>
<li><a href="http://www.parrhesiajournal.org/parrhesia07/parrhesia07_simondon1.pdf">The Position of the Problem Of Ontogenesis</a></li>
</ul>
<p><strong>Videos</strong></p>
<ul>
<li><a href="https://www.youtube.com/watch?v=0wW2l-nBIDg">Intensive and Topological Thinking</a></li>
<li><a href="https://www.youtube.com/watch?v=e_bXlEvygHg">On Cybernetics/Stafford Beer</a></li>
</ul>Build Large Flask Apps in the Real World2014-10-19T17:57:00+00:002014-10-19T17:57:00+00:00/software/2014/10/19/building-large-flask-apps-in-the-real-world<p>Scaling a <a href="http://flask.pocoo.org/">Flask</a> application is no immediately obvious
matter. At plug.dj we had a ~22,000-line Flask application. At my previous
employer our Flask application was significantly larger. Ultimately scaling a
code-base is less about the framework used and more about the software design
experience of the developers working on it. Scaling in terms of concurrent users
also has little to do with the web framework and more to do with your
understanding of load-balancing, caching, databases, etc. That being said, what
have I learned about how to organize a Flask application so it can comfortably grow?</p>
<p>Firstly, <a href="https://github.com/imwilsonxu/fbone">fbone</a> and
<a href="https://github.com/cburmeister/flask-bones">flask-bones</a> are great first
approximations. If you’re struggling to figure out how to structure your flask
application have a look at those and consider using either one as a template
that you can evolve to your needs. Also, I have to mention
<a href="https://github.com/audreyr/cookiecutter">cookiecutter</a> as a tool for templating
the structure of python applications in general. In terms of the web application
itself you might also consider using
<a href="https://pythonhosted.org/Flask-Classy/">Flask-Classy</a> to build out your views.</p>
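<p>As one hypothetical example of what such a template might evolve into (the module names below are invented for illustration - they are not prescribed by fbone, flask-bones, or cookiecutter):</p>

```text
myapp/
    __init__.py        # application factory: create_app(config)
    config.py          # per-environment configuration objects
    extensions.py      # shared db / cache / login-manager instances
    auth/
        models.py
        views.py       # a FlaskView subclass if using Flask-Classy
    geolocation/
        utils.py       # e.g. geocoding helpers
        views.py
    tests/
        test_auth.py
        test_geolocation.py
```

<p>The shape matters less than the consistency: each feature gets a package, shared infrastructure lives in one place, and tests mirror the module layout.</p>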
<p>Beyond that I hesitate to dictate anything else. There’s never a
one-size-fits-all solution for complex real-world problems like this. There
will never be a substitute for thinking up-front, and deeply at that, about the
organization of your application. The first few organizational decisions will
have ripple effects throughout the lifetime of the code base. Bad decisions can
trap you into a corner. Good decisions can make previously difficult problems
much easier. So instead here are a few heuristics that I’ve used to kickstart
this process:</p>
<ol>
<li><strong>Think about deployment</strong>. How is it getting to the server? egg, wheel, rpm?
Will there be <a href="http://jenkins-ci.org/">continuous integration</a>? Are you using
<a href="http://www.saltstack.com/">salt</a> or <a href="http://www.puppetlabs.com">puppet</a>? How
you deploy your application will determine what kind of structure you need
and what kind of supporting utilities you may or may not have to write.</li>
<li><strong>Think about app initialization</strong>. Where is the entry point? How are
components initialized and shared? If my user module needs a database
connection how do I ensure that it always gets an initialized database
connection? Do I use singletons? lazy loading? dependency injection? It
depends, and you should always be willing to revisit this decision. Also
think about how you’d do a deploy to a completely uninitialized environment.
How do you initialize the database(s)? Is the app configured by environment
variables or cfg files? How are those being shared and deployed?</li>
<li><strong>Think about resource lifetimes</strong>. Make sure you understand how your
database connections and other resources should be managed within a Flask
application. Typically you should initialize a resource when a request comes
in and tear it down before the response goes out. SQLAlchemy
<a href="http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#using-thread-local-scope-with-web-applications">explicitly covers</a>
integration with web frameworks in its documentation.</li>
<li><strong>Organize by principle of least surprise</strong>. Ask yourself, “How would I
organize this so that someone using Notepad with a good grasp of the
programming language would be able to find and edit any arbitrary component?”
This is ultimately how your codebase will seem to every new person who
encounters it. For example, if you are asked to modify the function that
geocodes a location and you have no experience with the code base, it’s
reasonable that you’d look in <em>app.geolocation.utils</em> as a first
approximation. You’d be surprised if instead it were somewhere like
<em>app.auth.models</em>. The first example follows the principle of least surprise.
Reduce the mental strain on yourself and others by sensibly organizing
components into well named modules.</li>
<li><strong>Think about testing</strong>. A focus on testing can help you avoid sticky
designs, because such designs quickly become untestable. Organize your tests along the same
lines as your modules so that the corresponding tests for any chunk of code
can easily be found.</li>
<li><strong>Think about logging</strong>. Bugs are going to happen and you’re going to need to
gather the information to solve them. Come up with a logging strategy that
covers your whole application and stick to it. You should be able to log data
from anywhere in any module and the logs should indicate exactly where the
data came from. In Python the best way to do this is to initialize a logger
at the top of each .py file; that way you always have access to a logger from
every module.</li>
<li><strong>Think about infrastructure changes</strong>. One of the best design heuristics you
can use is to imagine how you would build your application so that arbitrary
third-party dependencies (databases, web frameworks, etc.) could be swapped
out with minimal impact. As your application grows your infrastructure will
change. You should be able to switch databases, web frameworks, or deploy
code to mobile devices with minimal code changes. A good place to start in
figuring out how to do this is the
<a href="http://confreaks.com/videos/759-rubymidwest2011-keynote-architecture-the-lost-years">Architecture The Lost Years</a>
talk by Bob Martin.</li>
</ol>
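<p>As a sketch of heuristic (2), one way to make the “my user module needs a database connection” question explicit is plain constructor injection from a single composition root. This is only a minimal illustration under assumed names - <code>Database</code>, <code>UserService</code>, and <code>create_app</code> are all hypothetical:</p>

```python
class Database(object):
    """Hypothetical stand-in for a real connection pool."""
    def __init__(self, dsn):
        self.dsn = dsn

    def query(self, sql):
        # A real pool would execute sql here; we return an empty result.
        return []


class UserService(object):
    """Receives its dependencies explicitly instead of importing a singleton."""
    def __init__(self, db):
        self.db = db

    def list_users(self):
        return self.db.query("SELECT * FROM users")


def create_app(dsn):
    """Composition root: the one place where components get wired together.

    Swapping the database out later only touches this function.
    """
    db = Database(dsn)
    return UserService(db)
```

<p>The point isn’t this particular shape - singletons or lazy loading can be fine too - but that whichever answer you pick is made in one visible place rather than scattered through the modules.</p>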
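<p>In code, the per-module logger from heuristic (6) is just a few lines at the top of each .py file - <code>geocode</code> here is a hypothetical function standing in for whatever the module actually does:</p>

```python
import logging

# logging.getLogger(__name__) names the logger after the module, so every
# record automatically says which module it came from.
logger = logging.getLogger(__name__)


def geocode(address):
    """Hypothetical module-level function used for illustration."""
    logger.info("geocoding %s", address)
    return (0.0, 0.0)


if __name__ == "__main__":
    # A single basicConfig call at the entry point configures handlers and
    # format for every module's logger; %(name)s shows the origin.
    logging.basicConfig(
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
        level=logging.INFO,
    )
    geocode("some address")
```

<p>Because loggers form a dot-separated hierarchy, a handler attached once at the entry point covers <em>app.geolocation.utils</em> and every other module without further wiring.</p>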
<p>Each of these topics could easily fill a blog post on its own. With Flask in
particular (2), (3), and (6) are crucial. Flask isn’t like Ruby on Rails for a
reason. Flask is designed to be easy to get up and running. It also puts you
closer to WSGI. This, however, is a double-edged sword. It can make development
easier in areas where you know what you’re doing while also making it easy to
shoot yourself in the foot in the areas where you don’t.</p>
<p>In the end you should be aiming to design your application to depend on Flask as
little as possible. The framework shouldn’t dictate your application design, and
microframeworks in particular try to avoid doing this as much as possible.
Recently even Flask has felt bulky. <a href="http://falconframework.org/">Falcon</a> seems
like a good step in the direction of something smaller.</p>Scaling a Flask application is not an immediately obvious matter. At plug.dj we had a ~22,000-line Flask application. At my previous employer our Flask application was significantly larger. Ultimately scaling a code-base is less about the framework used and more about the software design experience of the developers working on it. Scaling in terms of concurrent users also has little to do with the web framework and more to do with your understanding of load-balancing, caching, databases, etc. That being said, what have I learned about how to organize a Flask application to comfortably grow?A Brief Historiography of OOP2014-09-27T03:31:00+00:002014-09-27T03:31:00+00:00/software/2014/09/27/a-brief-historiography-of-oop<p>To trace a history of object-oriented programming we have to travel back in
time - the year: 1962, the land: Norway. A language called
<a href="http://en.wikipedia.org/wiki/ALGOL">ALGOL</a> is all the rage as people begin
exploring ways of programming away from the bare metal. ALGOL is quite a feat
given the technology of the time.
<a href="http://en.wikipedia.org/wiki/Kristen_Nygaard">Kristen Nygaard</a> and
<a href="http://en.wikipedia.org/wiki/Ole-Johan_Dahl">Ole-Johan Dahl</a> decide to build a
little thing on top of ALGOL that they dub
<a href="http://en.wikipedia.org/wiki/Simula">Simula</a>. The first version of this
language was designed for
<a href="http://en.wikipedia.org/wiki/Discrete_event_simulation">discrete event simulation</a>
with the later Simula 67 introducing objects and classes. Now it’s interesting
that OO begins here - so let’s pause and examine this more.</p>
<p><img src="/assets/images/porphyrys-tree.png" style="float:right" width="450" /></p>
<p>Discrete event modeling is a way of modeling systems that decomposes them into
entities and events. We can already see the primordial shapes of OO within this
context but it took Simula to accentuate and really bring them out in the way
we’re all familiar with now - objects, classes, inheritance, virtual methods,
and, interestingly, <a href="http://en.wikipedia.org/wiki/Coroutine">coroutines</a>. Given
the context - discrete event modeling - we can see just how coroutines would
have been useful. The word object was at
the time taken much more literally to correspond with a really existing object -
Ship, Missile, etc. In such situations the entities do not represent a single
straight-line of execution, but can interact with each other, just like real
entities, in all sorts of interesting ways. With objects the idea is to isolate
these entities and their logic. Co-routines were then one way of modeling the
interdependencies between these objects. Between objects and coroutines we can
see something like a primordial version of the actor model appear. This would be
developed a few years later taking heavy inspiration from Simula.</p>
<p>It is with Smalltalk that an interesting duality begins to emerge around the
word object. The object in object-oriented programming can be taken in a double
sense. Extrinsically, an object is a structure within the software that
corresponds to a really existing entity or set of entities within the physical
system being modeled - eg. Teller, Ship, Missile. Intrinsically, an object is a
way of modeling parts of a program by dividing it up into genus-species
hierarchies in an Aristotelian
<a href="http://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition">genus-differentia</a>
style (eg. App, File, Controller). Originally both senses seem to have been at
play, and are often muddled together. Later on, however, we see the intrinsic
definition of object begin to dominate. Objects are taken up as a means to
modularize code that supplements naked functions (in languages where these are
available) and modules. They are frequently treated as an intermediate level of
modularization, though this approach takes a long time to work out.</p>
<p>Smalltalk eventually gives rise to the languages we are familiar with today: <a href="http://en.wikipedia.org/wiki/C%2B%2B">C++</a> and
<a href="http://en.wikipedia.org/wiki/Java_%28programming_language%29">Java</a>. These
languages are the first to achieve wide-spread success and to heavily spread the
intrinsic view of object-oriented programming. It’s interesting to take a look
at the <a href="http://en.wikipedia.org/wiki/OOPSLA">OOPSLA proceedings</a>. Firstly we see
that functional languages have always been present but at the fringes. Smalltalk
is the clear leader early on with some penetration by C++. By 1997, however, we
see that Java has truly exploded onto the scene. This rise in object-oriented
languages and particularly in languages with the intrinsic view of
object-oriented programming seems well timed with the migration of machines
outside specialized university and military facilities and into the lives of a
wider segment of the population.</p>
<p>In the literature we also begin to see signs of wrestling with the intrinsic
view of object-oriented programming. Writing software this way provides several
layers of difficulty. Firstly, any two entities can be correlated and
differentiated in an infinite number of ways. For example, my coffee cup and a
pile of ash are related in that they’re both inanimate or both not purple. They
differ in that I drink out of a cup but not out of a pile of ash. Secondly, this
explosion of possibilities is usually resolved by falling back on preexisting
biases or prejudices about the entities involved (eg. modeling only two sexes or
using sex instead of gender, etc). This puts software directly in contact with
the material and social conditions of its creation. Systems and features begin
to reflect the social relations underlying their construction. They also inherit
the contradictions within organizations. Contradictions that had typically gone
without formalization or enunciation. Power structures and political games
within organizations begin to emerge as contradictions within their software.
This gives software an organization-wide importance. The power structures that
underlie organizations are codified and formalized and therefore made visible,
at least somewhat, to programmers. As an interesting aside this would also mean
that issues within the software departments of a business provide a good
symptomatic indicator of the health of the organization they sit within. The software
department may even give the first signs of increasing or decreasing dysfunction
within organizations. Low quality software or frequent downtime, for example,
can be seen as a symptom of dysfunctional leadership more than, as typically
diagnosed, a software process failure.</p>
<p>As software migrates into the labor force aided by object-oriented programming
the traditional, industrial era, power struggles begin to reappear. Firstly,
object-oriented programming gives the illusion that software, if definitively
decomposed at the beginning of a project, can be subject to estimation and rote
mechanical creation since the components are already modeled. Businesses thus
seize on object-oriented programming as a way to turn intellectual products into
assembly-line constructions. Software, however, uniquely frustrates this
endeavor. As the size of software grows the failures of this approach begin to
pile up. Software consultants become frustrated by their inability to succeed.
The mismatch between traditional construction with its upfront, top-down,
piecework style and the realities of software come to a head. The end result of
these and other similar struggles with the tensions of developing software
within businesses express themselves in the agile methodologies. These documents
attempt to find a new way, outside of software and within the business itself,
to deal with the process of developing software. They attempt change by
addressing dysfunction at the organizational level, and they often suggest
collaboration across the entire organization. These practices are so difficult
to introduce into existing businesses that an entire consultancy arises
fundamentally about helping organizations remedy their internal dysfunction
enough to develop software.</p>
<h3 id="summary-and-conclusion">Summary and Conclusion</h3>
<p>This post was born out of an experiment to see if anything interesting and novel
could be discovered simply by tracing the history of object-oriented
programming. We’ve only taken a brief and incredibly provisional look. What
began as a historical outline shifted after a double-sense was found in the
meanings assigned to the word object. The intrinsic definition ultimately wins
out as it alone holds the promise of automating software development and making
large software projects more understandable to the layman by using classical
representative, hierarchical models of abstraction. This and similar
advancements on the hardware and OS side lead to an explosion in the presence of
computers within organizations. As software becomes more fundamental to
organizations it comes to embody major aspects of the material conditions of
these businesses as they must be codified in order to be converted into
software. This puts software in contact with the material and social tensions of
organizations and societies. This contact has the byproduct of allowing the
quality of software to serve as a measure of dysfunction within an organization.
Software is particularly sensitive to the internal contradictions and
dysfunctions within businesses. If an organization cannot develop stable
software then it must be incredibly disorganized internally. As this sensitivity
is recognized and code bases continue growing in size there is a consonant
emergence of software consultancies to develop RAD tools as well as diagnose and
prescribe organizational fixes.</p>
<p>I basically stopped when it appeared the article could continue growing forever.
What would naturally follow is a history of the failures/successes of RAD tools
and the progression towards modern software development techniques. In order to
get a clearer picture of these progressions and perhaps a better understanding a
more in-depth view of the discussions within software communities about their
perceived difficulties is needed. In addition there is much overlap between
software and philosophy as regards appropriate systems for organizing encounters
with the material and social world. These and more could provide fruitful
directions for future software historiographies.</p>To trace a history of object-oriented programming we have to travel back in time - the year: 1962, the land: Norway. A language called ALGOL is all the rage as people begin exploring ways of programming away from the bare metal. ALGOL is quite a feat given the technology of the time. Kristen Nygaard and Ole-Johan Dahl decided to build a little thing on top of ALGOL they dub Simula. The first version of this language was designed for discrete event simulation with the later Simula 67 introducing objects and classes. Now it’s interesting that OO begins here - so let’s pause and examine this more.Python Configurations Done Right2014-06-26T06:27:00+00:002014-06-26T06:27:00+00:00/software/2014/06/26/python-configurations-done-right<p>If you’re building web applications then I hope that you’re following the <a href="http://12factor.net">Twelve-Factor App</a> approach. If so you will be <a href="http://12factor.net/config">storing your config in the environment</a> and for most of you that means in environment variables.</p>
<p>This approach has a few benefits over storing them in yaml, ini, json, etc. Firstly, it stops you from committing sensitive information into version control - which it feels weird to even say you should be using, if you’re not. Secondly, it allows your configuration to vary substantially across deployments - something it will inevitably tend to do.</p>
<p>Now if you’re following this approach then the question inevitably arises as to how you’re going to execute the app with a given configuration when you run it from the command-line. The following is a simple solution invented with some help from my colleagues at <a href="http://plug.dj">plug.dj</a> (EDIT: <a href="http://rustyrazorblade.com">Jon</a> pointed out to me we were doing something similar at <a href="http://shift.com">SHIFT</a>) - just write a simple shell script for every environment you deploy to like so:</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/sh</span>
<span class="nb">export </span><span class="nv">MY_APP_NAME</span><span class="o">=</span>superslice
<span class="nb">export </span><span class="nv">MY_APP_AWS_KEY</span><span class="o">=</span>super-secret
<span class="nb">export </span><span class="nv">MY_APP_AWS_SECRET_KEY</span><span class="o">=</span>super-super-secret
<span class="c"># Finally, call whatever command comes after this</span>
<span class="nv">$*</span>
</code></pre></div></div>
<p>For those who don’t know, <code>$*</code> is simply the
<a href="http://www.tldp.org/LDP/abs/html/internalvariables.html#ARGLIST">ARGLIST</a> - the
list of arguments passed to the script. Note that an unquoted <code>$*</code> re-splits
arguments on whitespace; if any of your arguments may contain spaces, prefer
<code>"$@"</code>, which passes each argument through intact. The usage will make more
sense once you see it in action below.</p>
<p>Now we just need to hook our application into a simple system for reading
these environment variables. This would look something like the following:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">os</span>
<span class="c"># Configuration variable names as constants</span>
<span class="n">APP_NAME</span> <span class="o">=</span> <span class="s">'MY_APP_NAME'</span>
<span class="n">APP_AWS_KEY</span> <span class="o">=</span> <span class="s">'MY_APP_AWS_KEY'</span>
<span class="n">APP_AWS_SECRET_KEY</span> <span class="o">=</span> <span class="s">'MY_APP_AWS_SECRET_KEY'</span>
<span class="k">class</span> <span class="nc">Config</span><span class="p">(</span><span class="nb">object</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">config</span><span class="o">=</span><span class="bp">None</span><span class="p">):</span>
        <span class="s">"""Initialize config with mapping.

        :param config: (Defaults to os.environ) A dict-like config object.
        :type config: dict
        """</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">config</span> <span class="o">=</span> <span class="n">config</span> <span class="k">if</span> <span class="n">config</span> <span class="k">else</span> <span class="n">os</span><span class="o">.</span><span class="n">environ</span>

    <span class="k">def</span> <span class="nf">get</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">key</span><span class="p">,</span> <span class="n">default</span><span class="o">=</span><span class="bp">None</span><span class="p">):</span>
        <span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">key</span><span class="p">,</span> <span class="n">default</span><span class="p">)</span>

    <span class="k">def</span> <span class="nf">get_bool</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">key</span><span class="p">,</span> <span class="n">default</span><span class="o">=</span><span class="bp">False</span><span class="p">):</span>
        <span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">key</span><span class="p">,</span> <span class="nb">str</span><span class="p">(</span><span class="n">default</span><span class="p">))</span><span class="o">.</span><span class="n">lower</span><span class="p">()</span> <span class="ow">in</span> <span class="p">(</span><span class="s">'true'</span><span class="p">,</span> <span class="s">'t'</span><span class="p">,</span> <span class="s">'1'</span><span class="p">)</span>
</code></pre></div></div>
<p>Finally, we can grab environment variables and begin using them in our
application.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">sys</span>
<span class="kn">import</span> <span class="nn">config</span>
<span class="k">if</span> <span class="n">__name__</span> <span class="o">==</span> <span class="s">'__main__'</span><span class="p">:</span>
    <span class="n">cfg</span> <span class="o">=</span> <span class="n">config</span><span class="o">.</span><span class="n">Config</span><span class="p">()</span>
    <span class="k">print</span> <span class="s">'Welcome to {}'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">cfg</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">APP_NAME</span><span class="p">))</span>
    <span class="k">print</span> <span class="n">sys</span><span class="o">.</span><span class="n">argv</span>
</code></pre></div></div>
<p>Then when it comes time to run your application simply prepend the shell script.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@computer:~/app$ ./localsettings.sh bin/app.py --app-arg1 arg1-val --app-arg2
Welcome to superslice
['bin/app.py', '--app-arg1', 'arg1-val', '--app-arg2']
</code></pre></div></div>If you’re building web applications then I hope that you’re following the Twelve-Factor App approach. If so you will be storing your config in the environment and for most of you that means in environment variables.Technological Criticism: Heidegger and Enframing2014-02-20T22:41:00+00:002014-02-20T22:41:00+00:00/philosophy/2014/02/20/technological-criticism-heidegger-and-enframing<p>Is technology good or calamitous? Do we control the development of technology or
does it control us? It never fails to amaze me how few technology professionals
ever approach these questions. Perhaps it is our desire to avoid cognitive
dissonance related to our work. Perhaps it is simple intellectual dishonesty or
cowardice. It could also simply be that criticism of technology is not easy to
come by for us - nowhere in the traditional canon of blogs, books, and papers is
an engagement with these questions easily found. It is for this reason that I’d
like to propose a series of articles in which I’ll share the thoughts of
influential critics of technology by outlining their basic arguments in an
accessible fashion with some of my own commentary attached. In the end the
judgment is, of course, yours.</p>
<h3 id="martin-heidegger-and-the-question-concerning-technology">Martin Heidegger and The Question Concerning Technology</h3>
<p>Martin Heidegger was the philosopher of Being. His magnum opus, <em>Being and
Time</em>, sought to re-examine what the meaning of the word <em>is</em> is and what human
beings must be like such that questions regarding their own existence are even
possible for them. He was incredibly influential in the 20th century and his
thought inspired existentialism, phenomenology, and deconstruction. His critique
of technology, originally written in 1955, was also quite influential and
inspired many ecological thinkers.</p>
<p>The argument begins from the instrumental definition of technology - technology
is a means to an end - and, after a thorough deconstruction of the inadequacies
of such a definition, works to uncover what is new about modern technology that
makes it so destructive and potentially dangerous. Heidegger tells us that there
are two ways for things to be brought forth into existence. The first, <em>physis</em>,
is through capacities already contained within the entity itself - the example
he gives is a flower bursting into bloom. The second, contained along with
<em>physis</em> in the Greek word <em>poiesis</em>, is through another entity - for example, a
chalice through the craftsman or a painting through the artist. Technology
primarily concerns itself with the latter, and modern technology does so in a
particular fashion that “sets upon nature” and “challenges forth the energies of
nature” [Heidegger] [1]. This challenging and setting upon causes us to order
the entities in our world in such a way that they are always standing ready to
be put to use - for example, the blender is always ready to blend or the
airplane on the runway is always prepared to take off. This challenging
relationship with nature also means that it is no longer viewed ecologically -
as something that we have a symbiotic relationship to - but instead as the
“chief storehouse of the standing energy reserve” to be set upon, unlocked,
transformed, stored, distributed, and redistributed <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>.</p>
<p>Heidegger does not think that we are exercising our free will when we attempt to
go at nature in this way, but instead that a particular mode of revealing
entities and understanding our relationship to them has got hold of us here
since setting upon, unlocking, transforming, storing, distributing, and
redistributing are all different methods of revelation. He calls this mode of
revelation Enframing (Ge-stell in German) as it emphasizes ordering over all
else. Enframing represents an extreme danger. It opens the possibility for
humans to forget their own essence as beings uniquely capable of revealing the
world in different ways - as beings capable of revealing ever new ways of being.
More and more it causes humans to see themselves exclusively as orderers and
everything, including themselves, as orderable.</p>
<p>Despite the bleak outlook on the future of Enframing, Heidegger saw a “saving
power” contained within it as well. That saving power was its potential to
clearly reveal to us our own essence as well as the essence of Enframing, and
thereby to avoid our being enslaved to a single mode of revealing. The essay
concludes with his call to the arts to help reveal to humanity in general the
insanity of Enframing and our fundamental essence as human beings.</p>
<h3 id="commentary">Commentary</h3>
<p>It helps, when evaluating the above argument by Heidegger, to understand what he
thinks human beings are. For Heidegger, we are entities always already immersed
in a world. Our world here consists of our social conditioning, geography,
history, art, and so on. We are so immersed in it that it envelops us and we can
only in rare moments actually get a glimpse of aspects of our world itself. Our
world is a byproduct of our way of understanding ourselves. Our way of
understanding ourselves ultimately determines what kinds of societies we have,
art we produce, and our relationships to each other and our environment. This
“way of understanding ourselves” is a part of our world also.</p>
<p>Upon looking back on history Heidegger claimed to have uncovered a hidden
history of the west in which human civilizations inhabited different worlds and
understood themselves and their place within these worlds differently and thus
conducted their lives differently. For example, the Medieval world was one
centered around God that viewed humans and animals as His creations (hence the
name creatures). Enframing is what defines the technological world in which we
now live. Its byproducts - alienation, widespread poverty, environmental
destruction, species extinction - can be understood as results symptomatic of
our Enframing mode of revealing. If we believe these things to be catastrophic
then the question is not how to fix them within this world, but how to enter
into a new world that does not have these as byproducts of its mode of
revealing.</p>
<p>Throughout several of his later essays Heidegger leaves hints as to what he
thinks such a world would possibly be like. It is a world that would seek to
bring us into harmony with the Earth rather than desecrate it, but for which we
would not have to give up all the knowledge we have gained over human history.
It is, instead, a world based on an understanding of our symbiotic relationship
to the Earth and an understanding of our role as guardians of modes of existing.
A world that would respect what he called the fourfold - Earth, Sky, Mortals,
and Divinities. That would encourage multiple modes of revealing rather than
overemphasizing a single one as Enframing currently does.</p>
<hr />
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Martin Heidegger. “The Question Concerning Technology and Other Essays”. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Is technology good or calamitous? Do we control the development of technology or does it control us? It never fails to amaze me how few technology professionals ever approach these questions. Perhaps it is our desire to avoid cognitive dissonance related to our work. Perhaps it is simple intellectual dishonesty or cowardice. It could also simply be that criticism of technology is not easy to come by for us - nowhere in the traditional canon of blogs, books, and papers is an engagement with these questions easily found. It is for this reason that I’d like to propose a series of articles in which I’ll share the thoughts of influential critics of technology by outlining their basic arguments in an accessible fashion with some of my own commentary attached. In the end the judgment is, of course, yours.Nietzsche’s Concept of Decadence2014-02-20T06:27:00+00:002014-02-20T06:27:00+00:00/philosophy/2014/02/20/nietzsches-concept-of-decadence<p>For Nietzsche decadence is the symptom which led him to discover the modern illness - nihilism.
It is a Darwinian genealogical disease - not necessarily one that creates inferior genes (though with the rise of epigenetics it is easy to speculate otherwise), but rather one which creates inferior beings.</p>
<p>Axioms:</p>
<ul>
<li>Some humans are naturally superior to others</li>
<li>Every superior type of man represents a type in the direction in which human beings ought to will their own development.</li>
</ul>
<p>Nietzsche imagines original human societies as ones with very clear and healthy sets of values.
Since all societies, for Nietzsche, originated in an act of domestication of unshaped roving populations by domineering individuals it was natural that such domineering individuals would create societies around those values they enjoyed.
What were these values?
Beauty, Intelligence, Health, Power, Vitality, Cheerfulness, etc.
These were the values of the pre-Socratic Greeks.
They represent a desire by humanity and human instincts to amplify all the qualities which man finds good in himself.
These are the original natural values - valued in all animals.
However, unlike other animals, mankind has the option to will these values or not to will these values.
In addition, man’s peculiar potential to shape his own future, allows him to will higher men embodying those things which he values but does not himself fully embody.
The “good” - in short, those embodying these values and with power in a society, the masters - set up and opposed themselves to the “bad” - in short, those not embodying these values and without power in society.
These were not, originally, character judgments or moral judgments, but instead merely names for attributes and social positions.
The “good” embodied the master’s values, the “bad” did not.</p>
<p>This is the original “pathos of distance” out of which arose masters and slaves.
The slaves, for a time, accepted their lot and even the values of their masters.
Slaves have always accepted their masters’ values, for on “instinct” alone the masters are clearly superior.
It was here, however, that resentment first took hold of those few lowly slaves who coveted their masters’ power and hated themselves.
These botched artists here sought to make themselves masters without embodying any masterly qualities.
This, however, required a ground prepared for such a revaluation.
In these men alone the soil was already ripe - for they were resentful and hateful enough to believe their own words - but the rest of the population required preparation and an induction into these values.
These first few resentful slaves sought to kick their masters from their thrones.
Under the masters’ values this would be impossible.
These slaves were neither beautiful nor intelligent.
Neither healthy nor powerful.
What they needed was a new, counterfeit set of values.
Ones that would allow them to become that which humanity desired.
If they were to be seen as such, however, some powerful new tools were needed.
Since resentment turns to hatred of the master what was needed was a new master.
This new master must champion the slaves and condemn the masters.
This new master must be above all possible human masters so that his word is absolute.
So that he can never be supplanted by anything human.
In addition, he must be singular, so that no in-fighting among masters can ever occur.
So that all slaves can be united under a single Father-like figure.
Enter God - the singular, unambiguous, supreme master over all creation.
God was the original slave invention, the first tool with which the slave revolt in values would begin.</p>
<p>Given such a new master - one which was clearly for the slaves and against the current masters - what sorts of values would he champion?
What sorts of values would he condemn?
Since he is the champion of the ugly, the weak, the sick, the stupid, and the hateful he must of course despise the beautiful, the strong, the healthy, and the smart.
He must choose the lowly slaves as his “chosen people” and command them against their masters - for they embody all the lowliest values.
Since God is all powerful he smites those who think themselves high and stand high in society. He champions those that are low, those that embody nothing, stand for nothing, can produce nothing, and value nothing.
Since they are the sick he must prescribe to them some health measures, but these must not be so extreme as to make them well.
For the sickness of the sick must be preserved if one is to keep one’s flock faithful and desperate for cures.</p>
<p>Christianity, however, is the first to invent a whole new set of tools.
The God of the Old Testament was merely hatred toward the masters through and through.
With Christianity the weak come to overthrow the strong - to make even the strong hate themselves. For this, a new and unprecedented tool was needed.
A psychological device which would cause the masters to hate themselves, glorify the weak, and become weak themselves.
For this, a variety of whips are needed.
The strong must weaken themselves whenever they recognize their own strength.
Become disgusted at any display of superiority, and so learn to automatically lower themselves should they ever feel raised.
For this were devised several instruments - guilt, pity, humility, sin, repentance.
A repertoire of psychological poisons which, when combined, could bring even a Titan to his knees.
However, the poor, being sickly themselves, felt drawn to these tools on instinct alone.
These were their natural tools.
With the widespread deployment of these tools begins the era of decadence.
The strong are now raised to be lowered.
The weak are raised above the tortured strong, for such corruption and sickness is, to them, no torture at all but merely their baseline.</p>
<p>Guilt - In short, self-torture over past miscalculations beyond what is reasonable.
Before, one would merely think “I have miscalculated.” or “That was stupid.” but never “I am evil”.
Pity - In short, the will to preserve all that is despicable and broken.
Humility - In short, the will to reduce all distances between man and man.
Sin - In short, the will to condemn oneself for one’s own misfortunes.
Repentance - In short, the will to apologize for sin in order to expiate it (Requires a priest to repent to. This is important, as it sets the priest up as the only one capable of expiating the psychological torture wrought by these tools. Thus, the priest is raised above all men as the only one knowing how to heal the wounded).</p>
<p>Nietzsche names this triumph over the masterly values the “slave revolt in morals”.
The human species had found a way to will the opposite of all that was pleasing and highest in itself in favor of a leveled society in which none triumphed over others.
He saw this as the triumph of mediocrity.
Ideas at once compelling and extremely controversial well over 100 years later.</p>For Nietzsche decadence is the symptom which led him to discover the modern illness - nihilism. It is a Darwinian genealogical disease - not necessarily one that creates inferior genes (though with the rise of epigenetics it is easy to speculate that this may not be the case), but rather one which creates inferior beings. Axioms:Internalized Tensions of Software2014-02-19T06:27:00+00:002014-02-19T06:27:00+00:00/software/2014/02/19/internalized-tensions-of-software<p>Software developed in a for-profit environment is subject to two contradictory
demands:</p>
<ul>
<li><em>Use-demand:</em> On the one hand, it must be useful, maintainable, and
effectively meet customer needs.</li>
<li><em>Value-demand:</em> On the other hand, it must deliver value (for example,
profits) in as timely a fashion as possible, be subject to estimation, and
help balance opportunity cost.</li>
</ul>
<p>It is easy to see how these tensions come into conflict.
For example, to make a piece of software maintainable may require a refactor whose exact timescale
escapes effective estimation.
Similarly, delivery so as not to incur opportunity
costs may require reducing maintainability or even incurring the cost of rework.</p>
<p>Within an organization the overemphasis of either pole of this tension can
produce a crisis.
If use-demand is overemphasized the result is perfectionism and a low rate of valuable output.
The details are constantly being reworked.
If sustained long enough the organization starves from its own inability to produce
valuable output.
If value-demand is overemphasized the result is typically low quality software mired in bugs and technical debt.
These projects eventually become trapped in firefighting and missed milestones, and inevitably have to be rewritten from scratch or heavily refactored.</p>
<p>The choice of software development process or methodology does not abolish these
conflicts, but rather provides the form within which they have room to move.
“This is, in general, the way in which real contradictions are resolved.
For instance, it is a contradiction to depict one body as constantly falling towards
another and at the same time flying away from it.
The ellipse is a form of motion within which this contradiction is both realized and resolved” <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup></p>
<p>The above framework should allow us to illuminate some of the reasons for the failure of traditional methodologies.
Let’s take, for example, the waterfall method.
This method attempts to use the kind of upfront, linear, unidirectional planning typically found in industrial manufacture.
The software is designed, built, debugged, and then shipped.
The problem, of course, is that it tries to cram use-demand into the framework required by value-demand.
This necessarily implies that the balancing required to address use-demand is always deferred until some later phase.
Slow feedback means limited responsiveness and thus success of every later phase is predestined by the quality of work and omniscience of the parties involved in the earlier phases.
The demands are here confusedly, almost nonsensically mediated.
Agile methods do better by explicitly acknowledging the need for mediation between the two demands.
The ruin of agile is typically due to partial adoption - having a morning scrum without developer-involved sprint planning, a global task backlog, a roadmap, and end-of-sprint postmortems.
The morning scrum in such a scenario then merely serves as a morning status check (bordering on micromanagement), and the feedback and mediation points have been completely removed.
These sorts of imbalances are the source of tremendous waste in the software industry.</p>
<p>When not viewed as mediators of internal tensions it is all too easy for processes to devolve into their ineffective alternatives.
As developers and team leaders we need to be aware of how we are striking this balance and whether or
not it is healthy for the project.
Writing the software itself is usually not the hardest part of the software lifecycle.
Instead, it is finding a company disciplined enough to perform the balancing act required consistently.</p>
<hr />
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Karl Marx, “Capital: Volume 1” <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>