writing on a variety of things: personal matters, things that fascinate me; matters of computing, philosophy and all-around (intro|retro)spection. all in a perpetual "drafts" state. if the note is empty or incomplete, it means i'll get back to it.
september 21 2019
just started pounding so hard on my keyboard that the J key flew off. i put it back.
writing is an outlet, it’s 8:47 am in philly, i woke up over an hour ago and did some random shit. replied to emotion-induced texts i sent yesterday, genuine still.
confused by many things right now, wondering if my current emotions are just “dramatic”. what does it matter if i’m overreacting? there’s something weird about having a diary that’s essentially public, but not quite publicized. the most recent minimalist/stylistic update to my website made it harder to share links to things i write, maybe that isn’t the worst thing.
trying not to touch twitter for a bit, it’s weird in how it offers a new “real” feeling but also takes away stuff, puts my thoughts under a gaze i’m viscerally aware of but still giddily perform for. i talk about a lot but there’s still very much i’m too ashamed to say.
just ordered anti-oedipus on amazon, probably the only time i’ve ever clicked the buy now button. i came across an article about guattari’s writing process and it reminded me of how writing is an outlet, an ironically personal and cerebral activity that taps you ever closer to the “real”.
i’ve been trying to apply my attempts at writing a lot more thoughtfully to how i talk as well. i tweeted about saving thoughts until i find a compassionate way to say them.
i wonder if i’ll just go back to normal still, if i forget all the visceral lessons i’ve learned recently. if i somehow lose the intensity of the moment. if the (feeling of) clarity will go away after a few days/weeks/months. it’s an agonizing intensity, but maybe it’d be worthwhile to maintain this, it’s the realest i’ve felt in a long time.
i’m writing in vim right now.
still pounding hard on my keyboard.
september 20 2019
i’ve been writing a lot less recently, because i’ve been trying to write smarter things, realer things, truer things.
the general idea behind this is that much of what i’ve been reading hasn’t quite been as smart, real and true as would then lead me to write similarly smart, real, true things.
an additional realization is that the things i (am trying to) write aren’t just a product of smart, real, true books or articles or poems, but more centrally result from smart, real, true real-life experiences. i’m not going to go into any of the typical discussions of what “real-life” is.
much of (my) life feels like it lacks a sufficient amount of smart, real, true experiences. things sometimes seem like a game or like an imitation of sorts, things that are untrue and unreal and unsmart, things that don’t feel like the ultimate truth, or strike a real nerve or feel profound and thought-provoking.
these feelings are sometimes felt in the moment or in retrospect, and they leave an ultimate anxiety over whether real things will ever be felt again, and over what the trajectory of experiences is.
it’s a struggle sometimes, to constantly feel at the mercy of forces that dictate what things are real and what things are not. it’s even more of a struggle to feel that the reality of an experience might fade away as time passes, or the gravity of a past situation might only be realized after it’s too late.
it would be convenient to ignore the unreal and just focus on the real, fish out the false and be inspired by the truth, or spared from the plain dumb stuff and enlightened by the profound things in life. but there seems to be a reason for this struggle, this struggle that somehow shows up in every single bit of life.
subjectivism tells us to create our own truth, reality and meaning in life. realism tells us to ignore this struggle and take life in stride. the question of which choice is which in a particular scenario is a hard one.
those fancy terms tell you that this struggle isn’t one that hasn’t had much thought put into it. indeed it’s a struggle that remains ongoing, both in every individual’s mind as well as on our behalf by those whose thought we revere.
it’s a struggle.
april 13 2019
i’m sitting on a plane back to san francisco and a weird thought came to my head about the morality of altruistic-yet-unsustainable businesses, in terms of the outcome on the workers they employ. the question in my head is: is it moral for a company with a bunch of funding/capital, to treat its workers well and pay them very high wages for their work (compared to market value), when it will only lead to the company running out of money and having to fire all its workers – as compared to the company being conservative with wages and workers’ well-being while offering relative job security?
i realized that a short period of earning a lot more money might be much more effective at bringing people to a higher economic status or out of poverty, than a long period of security in earning a meager amount while not enjoying the work as much. i wondered whether people would prefer going from one overpaid job to another that would each end in the company running out of money, or just working securely while earning just enough to survive.
the issue in the former case would be that the investment of whoever funds the business is lost… but i considered that this might not be the worst thing. after all, investments always have some risk of being lost and any amorality associated with running a business badly can’t necessarily be extended to include squandering the funder’s investment since they should have been aware of the risks anyway.
i imagine a whacky way to redistribute a lot of the wealth of the super-elites who often fund companies, would be to strategically offer a short-lived economic mobility boost to the workers employed by said companies. to break this down again, my premise is that: maybe you can overpay workers over a short period till the venture runs out of cash, helping them get off the ground, while not having to worry about paying back investors who willingly took that risk.
this is thoroughly hypothetical and there’s a number of obvious problems with it, but i thought it was an interesting set of adjacent thoughts with an intriguing idea at the edge.
i’ve been thinking a lot about some of the differing perspectives on progress (especially technological progress) that i’ve come across recently, especially in discussions on twitter. parts of
i’ve been circling around these thoughts for a little while because i’ve been trying to reconcile my desire to creatively express myself through programming with the feelings of burnout from working in programming environments with too much complexity.
one example of this was of me trying to use kubernetes to experiment with different potential service architectures for a project. i thought this was a reasonable thing to attempt to do and a valuable learning experience but i quickly got blocked by the complexity and volume of configuration i needed to even get off the ground.
i think this phenomenon applies outside of programming as well, but it just so happens that we’ve engineered our field to have a lot of unnecessary complexity and polluted the potential for programming as a means for creative expression that everyone could have access to.
Feb 27, 2019 - 1:18 AM
realizing, to some extent, that i share things not to seem “smart” or to signal anything specific, but because i feel some innate need to externalize as much of my consciousness and thought as possible
this feeling seems almost like a survival instinct in ways i don’t have all the words to explain… for one thing, it feels like profound thoughts aren’t worth having if they aren’t externalized somehow; it feels like a gross disservice to myself otherwise
i’ve tweeted in the past about how the internet/computers could be what helps us live forever. the internet presents something of a thought-sensitive film onto which all that flows through our minds can be impressed for all eternity.
i’ve also been thinking a lot about the idea of signaling, especially as it relates to finding community by exposing one’s thoughts and feelings. i haven’t quite found the words to externalize my thoughts on this but there’s a good mat dryhurst interview that does a much better job than i could on this (screenshotted excerpt below)
Feb 21 2019
a thought/metaphor comparing actors in a rationalist/naturalist context to stateless/pure functions and stateful functions (objects/classes) in programming language theory. inspired by my “Wealth and Power” class (SOC 220 at Drexel)
(lifted directly from my class notes)
looking critically at naturalism we realize that it suggests that all incentives or driving forces on individuals come as a result of external forces and doesn’t take into account internal motivation… while in reality, there’s internal forces that actually directly determine people’s behavior, and external forces are kind of proxied via internal forces that differ from actor to actor. indeed internal forces can be influenced by different external forces that are encountered over time, meaning that the history of an actor is also relevant to how they behave.
an interesting way to port this contrast over to programming language theory is in comparing stateless/pure functions and stateful functions (objects/classes) → stateless functions (agents who, according to naturalism, are only influenced by external factors) have their output fully determined by their input parameters and always return the same output for the same input, vs stateful functions (agents in a practical context who have dynamic internal motivations) which hold onto some internal state that might be manipulated by any parameters passed to them over a program’s lifetime
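to make the mapping concrete, here’s a minimal python sketch (all names are mine, purely illustrative): a pure function whose output depends only on its inputs, next to a class whose responses drift with its accumulated history — the “internal forces” that past “external forces” have shaped.

```python
# a "naturalist" actor: behavior fully determined by external input.
# same stimulus in, same response out, every time.
def pure_actor(stimulus: int) -> int:
    return stimulus * 2

# a stateful actor: responses depend on internal state, which is itself
# shaped by the history of stimuli the actor has encountered.
class StatefulActor:
    def __init__(self):
        self.mood = 0  # internal state, an "internal force"

    def respond(self, stimulus: int) -> int:
        self.mood += stimulus  # external forces reshape internal state
        return stimulus * 2 + self.mood  # response proxied through state

print(pure_actor(3), pure_actor(3))  # identical: 6 6

actor = StatefulActor()
print(actor.respond(3), actor.respond(3))  # diverge: 9 12
```

the same stimulus yields diverging responses from the stateful actor precisely because its history is part of its behavior — which is the part naturalism, on this reading, leaves out.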
Feb 14, 2019
what if, instead of exposing application programming interfaces or “APIs”, software (especially web-based software) exposed document object models (DOMs), via which users can query for objects and attach events or manipulate said DOM… i feel like pulling this idea from the webpage level to the service level will help the web reach its potential as a truly object oriented system
i think this idea is really immense and powerful, imagine being able to use an API the way querySelectorAll is used, to dynamically determine how a resource can be accessed and manipulated. the object model that exposes such a resource can also be dynamically modified to accommodate whatever queries are being made to it.
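a toy python sketch of what that could look like (the selector model, class names and tree shape are all invented for illustration): a service exposing a queryable object model that you can walk and attach listeners to, rather than a fixed set of endpoints.

```python
# hypothetical sketch: a service-level "DOM" of resources instead of
# fixed API endpoints. everything here is illustrative, not a real API.
class Node:
    def __init__(self, kind, **attrs):
        self.kind = kind
        self.attrs = attrs
        self.children = []
        self.listeners = {}

    def append(self, child):
        self.children.append(child)
        return child

    def query_all(self, kind):
        # walk the tree, like querySelectorAll over a service's resources
        matches = [c for c in self.children if c.kind == kind]
        for c in self.children:
            matches += c.query_all(kind)
        return matches

    def on(self, event, handler):
        # attach an event listener to a resource, DOM-style
        self.listeners.setdefault(event, []).append(handler)

service = Node("service")
service.append(Node("user", name="ada"))
service.append(Node("user", name="grace"))

users = service.query_all("user")
print([u.attrs["name"] for u in users])  # ['ada', 'grace']
users[0].on("update", lambda evt: print("user changed:", evt))
```

the interesting property is that the consumer never needs to know endpoint paths up front — it discovers resources by querying the model, the way a userscript discovers elements on a page.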
Feb 06, 2019
why do programmers have a tendency to be so confident about our abilities to complete tasks or build certain things? it seems to have something to do with the fact that the skill of programming is less about knowing how to do specific things, and more about knowing how to find out how to do those things. anyone who’s done a decent amount of programming has at least the basic skill set of perusing tutorials or documentation and finding the necessary libraries to complete the tasks assigned to them.
while this kind of confidence is perfectly logical especially amongst relatively inexperienced programmers, i’ve come to be rather wary of it, because it tends to lead to aggressively glossing over details and optimizing for substandard results that “just work” over a cogent delivery. additionally, i’ve learned through my own experience that this kind of confidence has the unfortunate aspect of having programmers rush into unclear projects and poorly-defined implementation paths that only leave them burned out. in hindsight, the most useful type of advice or caution i got from mentors during my internships was around taking a step back and thoroughly considering whatever plan (or lack thereof) was in my head for accomplishing a task, lest i get burned by it. one of my mentors at coinbase would always give a drawn out “are you sure? i’ve been burned by that before” whenever i suggested a library or tool for implementing something, which at the time seemed limiting but probably saved me from a lot of pain. indeed, more experienced programmers tend to have more of this kind of caution than those with less experience.
further, this type of caution-over-confidence attitude is one that really helped me understand the value of my creative time as an engineer, because productivity is useless if being applied in the wrong direction. i’ve been gradually working myself out of a state of burnout from the intensity of my two internships in 2018, and as i’m getting back into the grind of creating, i’m taking serious considerations about how to better utilize my time in how i pick out tools, and how i iterate on projects.
Jan 24, 2019
thinking about how much of our interactions with computers are restricted to (touch)screens, keyboards and trackpads… seems often that a computer is only as interactive as the output surfaces and input dimensions it has; an “interactivity surface area” if i may.
i’ve been pretty fascinated with dynamicland’s approach to computing and realized that the most appealing part to me is the near infinite interactivity of the system, and its novel approach to hybrid input-output devices using projectors and paper with colored borders.
in a sense, we’ve been restricted to screens and keyboards because we tend to define our computer systems around the interactivity models we have or that seem practical, instead of designing interactivity modes around our systems. to be fair, there was definitely a time when mice and keyboards were cutting edge, but as there’s been an expansion in what we can do with computer systems, there hasn’t been a corresponding expansion in the physical interfaces/interactivity modes through which we can do these things.
i suppose this is why VR/AR seem promising, but there’s the worry that headsets will never quite be as tangible as we want because of the need to opt in… advanced projectors seem more ideal to me for this.
Jan 23, 2019
some notes from a conversation with a friend a few months ago
Art vs Artist. Do we separate the art from the artist? How about we think about it in terms of elevating the art instead of the artist, while understanding that doing one might lead to the other and vice versa (depending on certain criteria of course)… it’s all about balance. E.g. elevating the music of a pedophile is very likely to elevate his personality. Art can be elevated boundlessly, but the artist cannot be elevated beyond humanity or the law and absolved of their transgressions. Hence, due to the art’s fundamental connection to the artist, a bound is placed on the art as well. And because the elevation of art is not a concrete thing, there’s a place for elevating the artist in service of the art.
Jan 07, 2019 [incomplete]
i used polymer a bunch during my first internship at google and had gotten familiar with it a few months before because the promise of a native API for web components seemed really appealing to me.
in using the tool (roughly version 1.x at the time, 2.x was still in infancy), i dealt pretty closely with a number of different dynamics to how it worked, specifically in the contexts of state transfer through data binding and events. polymer’s general philosophy was to create an API consistent with the workings of native web components
Jan 06, 2019
answer me this
Jan 06, 2019 let’s consider a field to consist of a series of pillars, with some pillars serving as the basis for other pillars. these pillars can be considered to be abstractions
(I’ll do a little drawing here to illustrate it)
for the field to be coherent, the relative height of the topmost pillars (abstraction layers) must be very similar. this way the
Jan 06, 2019
i’ve heard a lot of alan kay hype recently, and while turing awards are great and all, i feel like the thing that really got me into his cult of thinking is his emphasis on “computing as a medium”.
medium is a really broad term, but it generally describes a sort of ether through which certain things (let’s call them “processes”) occur. but these processes are not the medium, nor are they distinct to the medium alone.
Jan 04, 2019
this is a weird idea that i think is worth exploring in how we think about computer programming. i typically program with two primary tools, vim and the bash shell, with vim kind of being nested inside of bash. the vim tool is used to edit plaintext program files and the bash tool is used to do larger operations such as compile or run individual code files. more significantly, bash is used to do “scripting” operations that call individual program files (or binaries compiled from them). both the operations specified in text files with vim, as well as the operations specified on the terminal with bash, are part of programming but are generally treated somewhat differently. directly editing text files that describe computations to be compiled and/or run later represents something of a “layer 1” programming, while the process of using these layer 1 programs or doing general operations (as enabled by the larger operating system) serves as an encapsulating “layer 2”.
the Unix philosophy of programming is where this contrast is best illustrated, and it generally serves as an inspiration for my thinking here. but other programming systems and models in more specific domains show how this principle manifests. a good example is ML programming workflows, which usually consist of the distinct layer 1 process of specifying model architecture in tensorflow/pytorch or something, and the wider layer 2 process of actually training the models or deploying/running them in different environments.
additional note as at Feb 04, 2019: i was reading about Ivan Sutherland’s work with sketchpad and the field of constraint-based programming as a whole, and it feels like there are some interesting mappings between this and the layer 1 - layer 2 metaphor i’m trying to develop here. constraint-based environments involve the two distinct processes of defining constraints on the behavior of a dynamic system and then creating and experimenting within this constrained environment. perhaps thinking of regular coding as setting constraints and scripting as the experimentation aspect gives more solidity to this metaphor and shows more parallels in how dual-layer programming processes can be used for a more powerful experience. for instance, the direct visual manipulation presentation of sketchpad gives us something of a hint as to how to build a more intuitive programming experience.
this constraints idea is pretty vast in how it applies to creative processes. abstractions are fundamentally constraints we apply in programming. different paradigms such as object or function orientation, or static or dynamic typing, are constraints that give us more accidental successes (see: https://vimeo.com/242081961). will maybe talk more about this stuff. constraints often imply learning/setup curves, but they’re often very useful. indeed, what is a system but the constraints in which it exists and operates
Jan 03, 2019
i’ve been thinking about the huge number of duplicated software products that share many user or programming interface specifics but are largely non-interoperable because of whatever proprietary boundaries or the lack of a common file format.
feels like this kind of lack of interoperability for identical interfaces is very much at odds with the ideal state of computing
hoping that, in the fight against needless software complexity, we can find a way to infer bridges in synonymous interfaces and break our slavery to APIs
Jan 02, 2019
some thoughts set off by this interview with Alan Kay: https://www.fastcompany.com/40435064/what-alan-kay-thinks-about-the-iphone-and-technology-now
Computers as “wheels for the mind” — steve jobs
Dec 24, 2018
i want to isolate some key elements of why certain apps are better suited for exploration than consumption. i’ll use a few different apps i use as case studies on this: twitter, are.na, instagram and spotify (trying to do non social-centric apps as well)… might also try netflix, esp. as contrasted with traditional cable TV.
it’s worth noting that explorative vs consumptive nature of apps is less of a binary and more of a spectrum e.g. an app can be 20% explorative and 80% consumptive.
here’s some of the key elements i’ve got so far:
exploration oriented content incentivizes more linking and reference sharing, with Are.na being a prime example of this
there’s a bit of a blurred line here though; some of the activities provided by apps don’t cleanly fall on either side. a good example is twitter’s retweets and quoting (or “retweet with comment”), where an RT could be a random reaction or a share sans context/endorsement, and alternatively could be a sort of linkage for use in other discussions
the netflix vs cable contrast is interesting in that netflix’s dominance can be largely ascribed to its giving users the ability to explore content instead of just passively consuming it according to channel schedules in their TV guides. i think this is a prime example of a pattern of disruption in the content space: services that enable a transition from passive consumption to more active exploration and content discovery.
Dec 11, 2018
will get back to this
Dec 11, 2018
searching for the ideal
operating system medium
Dec 10, 2018
perhaps the fundamental difference between capitalism and (whatever the alternate, ideal system is, we’ll use “communism” as the proxy term for this) is that capitalism only amplifies the desires of those in power/at the top of the pyramid, while communism is aimed at highlighting the desires of those at the bottom of the power pyramid that is society
Dec 05, 2018
Experimenting with voice notes
Part 1: relative lack of composability in English
Part 2: relatively non-intuitive composability of English words
Dec 03, 2018
“understanding the flow of money (national revenue/expenditure) is useless outside of the context of power structures through which it flows”
I’ve been fascinated for a long time with two projects by nigerian techies: yourbudgit.ng and tracka.ng, both spearheaded by Olusegun Onigbinde. BudgIT provides detailed analyses of budgets put out by various arms of the Nigerian government, and Tracka analyzes and crowdsources feedback on public projects in the country. These projects championed an ideal of education and radical transparency on the state of the nation.
a google keep note i wrote over a year ago
Both these projects fit into ideas i’ve had around providing detailed exposure to the country’s electorate on issues that affect them. But i’ve long felt that there are some missing pieces in their approach which i summarized in the opening quote.
I propose a tool, in the form of a graph of all individuals within the nigerian political system, annotated with revenue/expenditure data from both BudgIT and Tracka. I think this could help better illuminate the power structures in our system and give a more thorough education to the electorate.
Dec 03, 2018
just as whiteness constitutes an umbrella identity that assimilates an oppressive, capitalistic ruling class, I would argue africanness encompasses the global proletariat, oppressed, indigenous class.
Dec 03, 2018
In worrying what other people think about me, I seem to have started to make harsh projections unto myself. Learning to be selective with the projections I internalize
Nov 30, 2018
lemme first describe what I mean by “flawed standards”: strongly accepted principles of industries or fields of thought that reveal themselves, especially over a long time, to be fundamentally flawed constructs. this concept is further characterized by a resulting condition: adherents of such a field of thought will ignore individual manifestations of its flaws, due to their belief that the standard is perfect, treating such flaws as merely isolated inefficiencies rather than signs of a fundamental flaw — even though both remain possibilities for any standard.
it feels like a robust way to tell the difference between these is to identify patterns in flaws over time… it’s like trying to tell if a few minor rips/holes on a piece of cloth are positioned to form a big rip.. or just make the cloth a little more porous
Nov 29, 2018
Feels like an arrogant statement frankly sometimes (in the generic sense at least, not just in the context of software)… Humanity has long been plagued with the inability to predict the longterm, so why are we all of a sudden sure that what we’re building is actually going to have impact on the future?
Nov 28, 2018
(placeholder for now, continuing on ideas from decentralized naming)
we’re gonna need a decentralized “google” if the p2p web is really going to work.
Nov 28, 2018
placeholder: some musings on the relationship between education and class in the 21st century
Nov 27, 2018
(Placeholder for notes on the act of “checking in” on friends)
Nov 26, 2018
what’s the difference between trustful and trustless systems, in the general context at least:
i would say that there’s no such thing as a trustless environment — simply environments where trust does not accumulate, as opposed to trustful environments/systems where previous trust from successful interactions accumulates and persists across interactions.
let me give an example:
jack has had jill deliver packages for them* in the past. the first few times this happened, jack had to verify that jill actually delivered the package because they had no trust in jill. subsequently, due to a track record of successfully delivered packages, jack accumulated enough trust in jill to stop verifying every delivery.
a counter example in a trustless context:
jack has had jill deliver their packages multiple times. jack doesn’t actually know who jill is and never interacts with them directly. so jack always verifies the package delivery because the deliverer is assumed to be different every time. there’s no chance that jack builds trust for jill because they don’t interact with/know each other and, in essence, don’t trust each other
(* jack and jill are non-binary and use pronouns “they/them/their”)
the pertinent question that arises from this framework, especially in the distributed computing context, is: is there a realistic extent to which trust can accumulate as a result of a track record of interactions between entities? the implication of this idea could be optimizing verification in blockchains (or other distributed systems) by simply not doing it based on past verifications.
of course there are limits to trust in any context, and it only provides us with a probability of verification based off past interactions, but this probability could be useful nonetheless.
this is all very abstract thought… i was wondering if i could adapt ideas around accumulating trust in human interactions to distributed computing questions… because it seems the Byzantine Generals Problem didn’t account for the existence of accumulated trust between generals due to fighting alongside each other in previous wars.
perhaps it would be useful for blockchains to also codify this concept of accumulated trust?
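here’s a toy python sketch of the jack-and-jill framework (entirely illustrative — a real distributed system would need far more care): a verifier that checks every interaction with a counterparty until their track record crosses a threshold, after which it only spot-checks probabilistically. the class name, threshold and rate are all made up.

```python
import random

class TrustingVerifier:
    """verify every interaction until trust accumulates, then only sample."""

    def __init__(self, threshold=3, spot_check_rate=0.1):
        self.trust = {}  # counterparty -> count of successful verifications
        self.threshold = threshold
        self.spot_check_rate = spot_check_rate

    def should_verify(self, who: str) -> bool:
        if self.trust.get(who, 0) < self.threshold:
            return True  # "trustless" phase: always verify
        # "trustful" phase: trust has accumulated, verify only occasionally
        return random.random() < self.spot_check_rate

    def record_success(self, who: str):
        self.trust[who] = self.trust.get(who, 0) + 1

v = TrustingVerifier(threshold=3)
for _ in range(3):
    assert v.should_verify("jill")  # jack verifies every early delivery
    v.record_success("jill")
# after the track record, verification becomes an occasional spot check
```

as the note above says, this only ever gives a probability, not a guarantee — the spot-check rate is exactly where the “limits of trust” would be tuned.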
follow up as at Feb 10, 2019:
Just read this:
Nov 26, 2018 // placeholder for now
the idea that evolution, in a variety of contexts, can be either cyclical or acyclical
i wonder if there’s a field/sub-field in academia devoted to analyzing different evolutionary patterns and identifying which of these categories they fit into
Nov 19, 2018
what is the world but the sum of our full range of perceptions… at some point, we’ll figure out how to digitally “simulate” the entire world, and if we do it right there really isn’t going to be a difference
Nov 19, 2018
just a theory that has been sitting in my head: the problem of being able to absolutely locate a place/resource/anything really is distinct from the problem of creating a parseable/human-readable “address” for said thing.
prime analogy for this is the fact that (37.422160, -122.084270) is different from “1600 Amphitheatre Parkway, Mountain View, CA”. similarly, and this is what i’m trying to get at, it’s necessary for decentralized systems like the Interplanetary File System (IPFS) storage network to have a name service system (IPNS in this case) running alongside it to identify resources outside of their absolute addresses, same thing with ethereum and ENS or, classically, WWW and DNS.
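a tiny python sketch of the distinction (the hashing scheme is a deliberate simplification, not how any of these systems actually derive addresses): the absolute address is derived from the content itself, while a mutable naming layer maps human-readable names onto whatever the current address is — roughly the IPNS-over-IPFS or DNS-over-IP relationship.

```python
import hashlib

def content_address(data: bytes) -> str:
    # an absolute, content-derived address (like a CID, heavily simplified)
    return hashlib.sha256(data).hexdigest()

# the naming layer: mutable, human-readable names pointing at addresses
name_service = {}

doc_v1 = b"hello world"
name_service["my-site"] = content_address(doc_v1)  # publish name -> address

# updating the content changes its absolute address, but not its name
doc_v2 = b"hello again"
name_service["my-site"] = content_address(doc_v2)

# resolution: name -> current absolute address -> content
print(name_service["my-site"] == content_address(doc_v2))  # True
```

the two problems really are distinct: the content address never lies but is unreadable and changes with every edit, while the name stays stable and human-parseable — which is why the naming layer has to exist as its own (and ideally also decentralized) system.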
i’m wondering if it’d be worth the effort to create standards around decentralized naming and formalize abstractions/tools to simplify the creation of decentralized networks. i suppose there’d be a demand for this as more and more of the internet is accessed via decentralized networks and more services are hosted in a decentralized manner.
to give some background, about a year ago i was really fascinated with the idea of upspin.org with its distributed namespace platform (kind of like a global filesystem with an intuitive permissions model). but there was the fundamental problem of their nameservers being hosted with google, which kind of undermines the whole “decentralized” thing.
perhaps some kind of dedicated protocol for providing distributed nameservers for different distributed services could be a major stepping stone in web3.0
i think solutions like handshake (https://handshake.org/) are aimed at solving this and i’m excited to see how that turns out.
Nov 19, 2018
I really like the term “Operating System” because of how flexible it fundamentally is. It’s something that extends well beyond computing and represents, in my opinion, a sort of intermediary or translator between humans and systems which provide a series of capabilities.
With this fundamental property in mind, i think there’s much to explore around the gradual evolution in what we consider to be an operating system for computers, as well as the different intermediary systems that essentially fulfil the role of “operating system” in different facets of our lives. For example, Amazon is gradually fulfilling the role of operating system for human consumption, kind of like an interface between the everyday person and the capitalistic consumption complex… providing the digital and physical storefronts for all the tangible and intangible things we consume (see this note).
Dec 10, 2018
This raises a question: what are some other things for which operating systems exist or need to exist? What about an operating system for creation or education?
Dec 13, 2018
Here’s something else: i just came across a reference to an operating system (in the context of computers) as something that does two fundamental things: providing hardware abstraction and multiplexing resources → how does this idea hold when expanded into non-computing contexts? what systems provide abstractions over the bare-bones activities in a context and help us manage our resources (monetary, cognitive etc.) and distribute them amongst tasks or processes?
Nov 16, 2018
Keeping friends from your childhood is hard. Complications of time or distance often make old friendships disappear if you don’t put in effort. One thing that’s been on my mind today is the fact that relationships that span long periods of time, especially those that traverse the transformative adolescent years, have to undergo a similar transformation if at all they are to persist. Perhaps, every decade or so, it’d be wise to intentionally reevaluate our longest friendships to see if they still hold despite individual transformations, as opposed to passively interacting amidst the friction of growing incompatibility.
Nov 14, 2018
file this under stupid ideas
should start by noting that i personally believe that the fundamental solution to prisons is to abolish their current form totally. that’s a bit of a lofty goal so i’m going to try to articulate some ideas around how crypto could really fulfil its potential of building an open and inclusive financial system with prisons & ex-felons.
Thoughts on the topic from a couple different angles but mostly through the lens of computing
Nov 13, 2018
Question: what is the fundamental difference between virtualization and abstraction (in the context of computing at least)? I would say, and this is probably total bullshit, abstractions serve the purpose of making things more understandable to humans, while virtualization makes things more understandable to the computer
Nov 13, 2018
In what ways do the internet and the digital realm enable greater (productivity through) asynchrony of our actions/processes?
We are able, for instance, to keep from being distracted from work by ordering food online in advance. Perhaps the fundamental thing that companies like amazon have done most effectively to optimize our consumptive instinct is remove the bounds on the asynchrony of said consumption.
Dec 15, 2018
There’s a downside to this, however: greater asynchrony in our communication almost imposes an unnecessarily long feedback loop, one that’s often rather crippling and reduces the overall quality and throughput of individual threads of communication.
Nov 13, 2018
the idea that traits or actions that give you leverage in certain spaces might not do the same in others, or even do the opposite. currently thinking about how that manifests in school: not all “learning” activities you partake in might actually affect your GPA; no one cares (for the most part) about the papers you’re reading on different computer architectures if you’re in a course studying only basic examples.
a slightly different example is from sociological studies of youth in marginalized communities: means of artistic expression such as rap music or graffiti might be viewed as delinquency in a traditional punitive social hierarchy, but equivalent expressions from youth in more equitable communities are lauded.
Nov 13, 2018
there’s probably some quality thoughts here or some illuminating distinction to be made. will revisit.
Here’s a series of thoughts sparked off by this article which takes aim at Amazon for “eliminating all the bulwarks against consumerism” and effectively dissolving the human experience into one of perpetual consumption.
Amazon, in my opinion, is aiming to position itself as the operating system for human life. It fails to do this in the sense that it only fulfills the human instinct to consume while neglecting the corresponding (and equally essential) instinct to create. Perhaps what’s missing in the current crop of mega companies/services is one that aims to build a thorough framework and operating system for boundless creation by humans.
The referenced article makes a good point as to why this emergence
another thought as at Nov 11, 2018
i wrote a few college essays on the issue of the gap between technological advancement and increasing human productivity since the turn of the century… i’d argue that this same imbalance in the sophistication of our means of consumption vs our means of creation/creativity is at the heart of the discrepancy that has economists so frightened — technology has advanced to make us better consumers but not necessarily better creators
Nov 01, 2018
(Placeholder for some in-development thoughts contrasting what it’s like to move around Lagos vs Philly or SF, and what balance of intuitiveness and guidance makes for ideal urban navigation. Also some thoughts about urban mobility in general)
Fundamentally, people don’t think in terms of distinct philosophies, but instead in terms of various socializations they’ve been subject to. So trying to analyze human systems in terms of philosophies instead of socializations is futile. By extension, communities or demographics are not governed in behaviour and circumstance by their philosophies but are instead at the whim of the sociologies they subscribe to or are subject to.
In many ways, we can think about computer platforms and applications as tools built upon other tools. The whole essence of the stack of platforms and protocols we have is tools on which other tools can be built, tools which were themselves built on tools. It’s often important when analyzing platforms and applications to understand their relative position in the stack and their immediate and/or essential neighbors (i.e. if platform/protocol A wasn’t below application B in the stack, would it matter? e.g. webapps require HTTP but don’t directly depend on the software architecture of the client device; the browser serves as an abstracting platform in this case)
Oct 22, 2018
Having clarity in absence of meaning (?)
Editor’s Blurb: As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man’s physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson’s famous address of 1837 on “The American Scholar,” this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge
Oct 16, 2018
In terms of standard ways of organizing tasks and setting goals, I think it’s important to make some key distinctions between how one does so on a personal project vs one that involves working on a team. There’s a plethora of methodologies for working in teams (especially in the software industry) that work well in the multi-person setting and might help organizations scale. However, i would argue that trying to re-apply these same philosophies to the singular case is futile. In short, there needs to be more scholarship and documentation on how to work on your own. Perhaps the reverence attached to lone builders/inventors comes from the general lack of consensus around what it takes to do so. Perhaps it’s also because there’s generally more variance, in terms of how work gets done, between one individual and another than between different groups, making it harder for such a consensus to be reached.
All of a sudden that phrase has new meaning for me
I’ve almost always been of the opinion that i’m incredibly lazy. I’m very selectively able to exert myself in different situations and often find myself
Dec 12, 2018
currently learning haskell, starting to collate resources and reference code.
interested in building/implementing distributed systems and blockchain tools in haskell, so the resources are largely oriented towards that
Oct 11, 2018
Digital (Platforms) vs Virtual (Reality)
Hear me out here
Digital constitutes an abstraction of the tangible world
Virtual constitutes an imitation of the tangible world
Virtual reality (or at least what we’re trying to achieve with it) constitutes a marriage of interactive abstractions (digital interfaces) and imitations of the tangible world.
Projects can fundamentally be of arbitrary size depending on their end purpose or whether they might be considered just a prototype, an MVP or a finished product — it’s all about the requirements of a project at a specific point in time.
On this note, traditional wisdom dictates that we understand the kind of manpower that is needed for such a project. I would argue the converse: in my experience, what is needed for a project (its requirements) should fit the manpower and concrete man-hours that are available for it. Project sizing, at a particular point in time, must depend on this combination of the resources of the maker(s) of said project.
Sole makers must understand how to size their prototype/MVP projects according to their individual ability to commit time and effort towards the project. As a project becomes bigger or higher stakes (e.g. an MVP project gaining users and monetization potential), one must commence the delicate balancing act of manpower and project requirements — increasing the former to fulfill the latter as the latter is increased to achieve further project goals (of profitability or user growth, for instance).
Sole makers are also often confronted with more uphill work while creating projects and have to hold themselves accountable for keeping focused, as well as for separating plain imaginary work from the uphill work involved in gaining certainty about the feasibility of a project and getting through all the so-called “set-up bugs”.
In my experience building things on my own (and mostly failing), I often try to assign myself tasks that would normally be distributed across a team of programmers. I found myself at a point when I decided I was much too weak a programmer, or was lacking the resources, to drive a project to completion on my own, and decided to save my energy for whenever I was able to find a team. At the core of this, i was setting unnecessarily high expectations for my own projects and assigning a number of pretty individually difficult tasks as a result. I would convince myself that i could handle them on my own but would just end up burnt out with a half-completed project and a dearth of confidence in my ability to execute. The thought above
Identity is a personal thing, while classification is external.
However, when people say things like “don’t let your identity hold you back”, they are committing the logical fallacy of conflating identity and classification. The quoted sentiment often does this while ignoring the effect of oppressive classification of groups as components of what holds them back — they instead focus on inconsequential classifications amidst social groups that might cause individuals to not believe in themselves.
The underlying premise of this, that identity can be influenced by classification, is valid. But the assumption that people’s inherent or self-determined identity is what warrants certain classifications is frankly dangerous. To illustrate: the assumption is often made that young black men are inherently rowdy or dangerous hoodlums, and that this classification warrants their being intensely policed. Simple sociological intuition dictates that this is indeed never the case, and that people and their resulting characteristics are by and large a result of the environments in which they exist, in this case: the societal neglect that black communities have long been subjected to.
Dec 11, 2018
In fact the
modularity is about abstractions, composability is about interoperability
modularity involves dependency, composability abhors dependency
modularity is a sub-property of composability
extensions on this note:
Sep 06, 2018
Notes on this topic from multiple angles including mathematics, programming languages (both control flow and memory interactions), software applications, operating systems, etc.
w.r.t. programming languages, Haskell (functional languages in general) is the prime example of a language built with the idea of composable computation in mind, hence the Monad typeclass, the atomic unit of composable computation, which encapsulates the result of combining (binding) operations.
what makes good composability possible in languages like Haskell and SQL is the use of (or robust support for) atomically transactional operations. i would argue that the parallel concept to monads in SQL is the transaction, which encapsulates an atomic operation on persistence (see ACID principles). A transaction can be built out of a number of other atomic operations but its result is indivisible (i.e. no discernible intermediate states) and deterministic given the same constituent sub-operations
— perhaps i should also be clearer on what i mean by atomicity: the smallest indivisible unit of something, in this case a computation.
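to make the transaction idea concrete, here’s a minimal sketch using python’s stdlib sqlite3 module (the accounts table and transfer function are made up for illustration): a failure midway through a transfer rolls the whole thing back, so there’s no discernible intermediate state.

```python
import sqlite3

# a toy ledger to demonstrate transactional atomicity (illustrative schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

def transfer(conn, debit_from, credit_to, amount, fail_midway=False):
    # `with conn` opens a transaction: commit on success, rollback on error
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, debit_from))
        if fail_midway:
            raise RuntimeError("crash mid-transfer")  # simulated failure
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, credit_to))

try:
    transfer(conn, "a", "b", 50, fail_midway=True)
except RuntimeError:
    pass

# the partial debit was rolled back: 'a' still has 100, 'b' still has 0
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

the transaction is built out of two sub-operations, but externally its result is all-or-nothing.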
in terms of control flow, composable operations are characterized by having one entry and one exit, in that they can be used as a statement anywhere in a program without disrupting the control flow, leading to minimal structured control flow (personal note: see the parallels between this and what you were trying to implement in artisan 1.0). this idea originated from research by Böhm and Jacopini in 1966 on eliminating “goto” statements from programs, replacing them with “if-then-else”s and “loops” (see wikipedia section).
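a toy sketch of the Böhm–Jacopini construction in python (the collatz example is hypothetical, just to have some jumps to eliminate): arbitrary goto-style flow is simulated with a single while-loop over a “program counter”, using only sequence, selection and iteration — one entry, one exit.

```python
def collatz_steps(n):
    # goto-style states: 0 = test, 1 = even branch, 2 = odd branch, 3 = done
    pc, steps = 0, 0
    while pc != 3:  # a single structured loop replaces all the jumps
        if pc == 0:
            pc = 3 if n == 1 else (1 if n % 2 == 0 else 2)
        elif pc == 1:
            n, steps, pc = n // 2, steps + 1, 0      # halve, "goto" test
        elif pc == 2:
            n, steps, pc = 3 * n + 1, steps + 1, 0   # 3n+1, "goto" test
    return steps
```

because the whole thing has one entry and one exit, `collatz_steps` can be dropped anywhere in a larger program as a plain statement without disrupting its control flow.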
the concept of Monads from functional programming (also called “workflows” in the f# community) is essentially another take on this principle that lifts directly from category theory (pun intended). monads serve as a way of accounting for and managing side effects in otherwise pure and composable operations that increase the number of / change the types of the outputs — an error (side-effect) handling technique in essence.
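a minimal sketch of the bind idea in python (hedging: real haskell monads are typeclass-based and far more general — here None just plays the role of Nothing, and the example functions are made up): each step either produces a value or fails, and bind short-circuits the rest of the pipeline on failure, so the error handling composes instead of being scattered through the logic.

```python
def bind(value, fn):
    # Maybe-style bind: a failed (None) step skips every later step
    return None if value is None else fn(value)

def parse_int(s):
    # hypothetical fallible step: string -> int, or None on bad input
    return int(s) if s.lstrip("-").isdigit() else None

def reciprocal(n):
    # another fallible step: 1/n, or None when n is zero
    return None if n == 0 else 1 / n

def safe_reciprocal_of(s):
    # composing the two steps without any explicit error checks in between
    return bind(bind(s, parse_int), reciprocal)
```

the composed pipeline returns 0.25 for "4" and None for both "0" and "x" — the failure path is handled once, in bind, not at every call site.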
An idea around this → Software Transactional Memory: instead of acquiring locks on modifiable data, it makes modifications and does an after-the-fact check for conflicts on the data, aborting and retrying if any are found.
personal note: the fundamental difference between transactional computation and regular computation is that transactions consider failure to be a feature and not an anomaly
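an optimistic-concurrency sketch in the spirit of STM (illustrative only — no real threads, and nothing like a production STM implementation): read a versioned cell, compute, then commit only if nobody bumped the version in the meantime; a conflict just means abort and retry, failure as a feature.

```python
class VersionedCell:
    """a single mutable cell guarded by a version counter, not a lock."""
    def __init__(self, value):
        self.value, self.version = value, 0

    def read(self):
        return self.value, self.version

    def try_commit(self, new_value, seen_version):
        # after-the-fact conflict check: did anyone write since we read?
        if self.version != seen_version:
            return False  # abort; the caller retries
        self.value, self.version = new_value, self.version + 1
        return True

def atomically(cell, fn):
    # retry loop: failed commits are expected, not anomalous
    while True:
        value, version = cell.read()
        if cell.try_commit(fn(value), version):
            return cell.value

cell = VersionedCell(10)
result = atomically(cell, lambda v: v + 5)
```

with no contention the first commit succeeds, leaving the cell at 15 with version 1; under contention the loop would simply recompute from the fresh value.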
something else i’ve culled from the “universal composability” wikipedia page: universal composability in terms of turing machines:
“The computation model of universal composability is that of interactive Turing machines that can activate each other by writing on each other’s communication tapes”
new thought as at Dec 31, 2018: since Turing machines and Lambda calculus are both equivalently agreed-upon models for universal computation, I wonder what the “computation model of universal composability” would be in terms of lambda calculus
… i’m thinking of how to be able to string together application functionality without any care for what the applications are doing. I think two core brands of composability are relevant in this: functional composition and object composition. my intuition tells me that what could be achieved with this would involve functional composition through object decomposition (this is kind of meaningless but i think the phrasing serves as a good pointer to the mental-memory location of the mental model i’m building around this). the essence of this is to remove dependencies between applications by decomposing relevant data into objects in between function passes so as to remove the need for linkable signatures. object composition/decomposition points to the idea of being able to compound or filter data from 1 or more sources so that individual functions only receive info that’s relevant to them.
GraphQL seems like a promising tool that could be extended to implement a common API layer for these kinds of applications, as it has composability primitives built in. Marius Eriksen’s “Your Server as a Function” paper proposes three key primitives for a composable system of services — “Futures”, “Services” and “Filters” (partly inspired by the UNIX philosophy) — implemented within Twitter’s microservices, which account for the distributed, asynchronous and concurrent nature of web systems (albeit formulated for the internal-services case). Tying a querying and query-composing layer like GraphQL to system primitives as proposed by Eriksen fulfils two key things: 1) offering a common public query layer to a system designed for internal services and 2) building a distributed, cross-service framework to enable composable querying on the frontend. The Finagle runtime library has the concepts from Eriksen’s paper encoded in it; the main challenge will be in stretching these concepts to the public-service case.
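a toy rendering of Eriksen’s primitives in python (the primitive names come from the paper, but this synchronous sketch drops the Future part entirely, and the service/filter bodies are made up): a Service is just a function from request to response, and a Filter wraps a service to produce a new service, so cross-cutting concerns compose by plain function nesting.

```python
def echo_service(request):
    # the "Service" primitive: request in, response out
    return {"echo": request["msg"]}

def logging_filter(service):
    # a "Filter": wraps a service, returning a new service
    def wrapped(request):
        response = service(request)
        response["logged"] = True  # hypothetical cross-cutting concern
        return response
    return wrapped

def auth_filter(service):
    # another filter: rejects requests lacking a token before the service runs
    def wrapped(request):
        if not request.get("token"):
            return {"error": "unauthorized"}
        return service(request)
    return wrapped

# filters stack like unix pipes: auth first, then logging, then the service
app = auth_filter(logging_filter(echo_service))
```

the composed `app` is itself just a service again, which is the whole point — any filter can wrap any service without either knowing about the other.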
Expression-based languages (where everything returns something, as opposed to statement-based ones) are a cornerstone of composability.
Functional Programming vs Functional Applications
new section as at Nov 26, 2018
The essence of functional programming, and the category theory on which it is based, is composability. I’m beginning to be of the opinion that the principles of composability, as exemplified through concepts like Monads, would be more useful if adapted to the application level as opposed to just programming languages. what i envision is something similar to the unix philosophy in terms of both separation of concerns as well as flexible composability via constructs like piping and things of that nature. in simpler terms: a haskell-based shell (“hash”, if i may :p), where composability is enforced at the inter-application/executable level instead of the intra-application or code/logic level. haskell as a language has proved notoriously dense to understand and i think this is because there is limited necessity for powerful composability at that level; instead, we should be focused on making individual executables small and single-responsibility, while baking more tools for composing their functionality into the shell.
Expression-based languages, such as haskell, where everything returns something (as opposed to statement-based ones), are a cornerstone of composability.
Also found this page interesting: https://www.haskell.org/arrows/index.html
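a tiny python sketch of what shell-level composition could feel like (the stages and data are made up; this just illustrates the piping idea from the note above): pipe() threads each stage’s output into the next, the way `grep | wc -l` does in unix, so each stage stays small and single-responsibility.

```python
from functools import reduce

def pipe(*stages):
    # compose stages left-to-right: output of each feeds the next
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

lines = ["error: disk full", "ok", "error: timeout"]

count_errors = pipe(
    lambda ls: [l for l in ls if l.startswith("error")],  # like `grep error`
    len,                                                   # like `wc -l`
)
```

running `count_errors(lines)` gives 2 — neither stage knows anything about the other, which is exactly the inter-executable composability the note is after.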
new section as at Nov 13, 2018
interesting tech/research projects exploring composability primitives at different levels of software stacks
“Work” here is in terms of concrete projects/sequences of tasks that require intellectual exertion, such as an open source project or an MVP app for a startup idea.
There’s the concept of Uphill vs Downhill Work that is important to be cognizant of. Uphill signifies the work that is necessary to validate a project’s feasibility. Downhill work is that which is done to drive a feasible project to completion.
One must figure out how to place key tasks in either of these categories. More pertinently, one must understand which tasks could actually be described as uphill tasks as opposed to mere conceptualization that has no place on the graph (let’s call this flatline work), as there is nothing to show for such work.
A good rule of thumb for differentiating such things is that uphill tasks involve actually getting into the implementation of things i.e. writing code (that runs), building out mocks or collating research.
Ideas from Paul Graham’s 2003 blog post Design and Research teach us that the process of design and doing uphill work hold many parallels. According to the post, good design is an iterative thing which involves:
The ideas around iterative work and collating learnings from each attempt at things are also a practical way to overcome perfectionist indecision in the process of building things. “Perfectionist Indecision” is something i’d describe as the inability to settle on which one out of a set of possible implementations of a certain thing (be it a database library or a continuous integration service) to use, because of perfectionism.
Singling out a candidate implementation/approach and going all in on it is the best way to find out if it could actually work better than the others; if any obvious roadblocks are identified in this process then a new candidate approach can be chosen based on the learnings — after all, you’re always working on unfinished prototypes anyway.