Sunday, March 31, 2024

Class struggles (v2.0?)

Commentary and thoughts on: https://www.youtube.com/watch?v=kNUNR2NZvFM

This is the clearest explanation I've seen of what's happening in Western economies.

And I broadly agree with the narrative.

In a sense it's basically what I was trying to articulate in https://www.facebook.com/watch.koi.expert/posts/pfbid02LmpTrGv8YkVV5gXFmWrebVDeKWhbwpMaSdVGGrP6uWX51Z9pVcT5jKS1Z87ReAdQl 

And it's kind of why I think that, despite the narrative being spot on, it doesn't really apply as cleanly as it would lead you to believe.

"Western" (particularly Americanized) societies run on consumerism, where ostensibly rich people often blow their fortunes on stupid things and their culture seems to encourage binge spending where even millionaires could somehow spend their way into bankruptcy. This is kind of an alternative "tax" on the moderately rich, or even the spendy ultra rich (cf. Donald Trump).

So in fact the situation he describes where there are two tiers of people, the rich who firmly hold onto their wealth, and the common people who spend most of their money on living expenses, is not seen as much in the West as in... uh... Hong Kong?

At least in Hong Kong the rich can be very frugal (relatively speaking), and the story about assets skyrocketing has been true for pretty much the past 40 years. I haven't done any quantitative studies, but that feels generally true.

There's also the question of whether there's anything "interesting" to invest in at all. In developed economies outside the US and China, there's really nothing that could absorb so much monetary investment. But interestingly, in the USA there's this funny thing called AI, which is a true resource (capital) sink: it can suck up as many billions as you can throw at it. Of course this is true for China as well. In theory any developed country can develop AI tech, but I don't think anyone would want to invest anywhere else.

The AI boom is what's driving the rally in US stocks, and while everyone has a different outlook on how impactful (whether positively or negatively) AI will be, it's hard to argue that it's purely a bubble with no substance. What I _think_, though, is that at this stage companies truly have no moat besides access to billions in cash to train big models, while cash is very slowly being sucked out of the system by the Fed. Even Nvidia's moat won't last long if AI pipelines become a trillion-dollar business: capitalism dictates that no single company will have that much power over the future of AI, and even if other companies don't step up to compete, the recently awakened US DoJ will file antitrust suits sooner or later (and probably sooner, especially if Chinese firms fail to compete effectively!).

The AI bubble will probably be similar to the dot-com bubble: the optimism is not unwarranted, but the timing of the rally is probably too early, and at this stage it may not be the big players who get it right. This time it's harder to imagine some scrappy startup training models better than the trillion-dollar megacaps, but it has essentially already happened (OpenAI, Anthropic, Mistral). The real question is: how do you monetize this crap?

So, despite agreeing that asset prices will rise once interest rates fall, I still see volatility in US stocks due to uncertainty about how AI affects actual returns. I suspect its main short-term impact is allowing firms to cut costs significantly, but I don't see how it would drive large (trillion-dollar-scale) profits yet. Given how cheap GPT-4 is (presumably they're not selling it at a loss, and inference costs will keep falling), on the inference side it looks like a race to the bottom *already*. Most people won't be able to tell "more intelligent" AI apart, because they're limited by their own ability to differentiate (much like how human eyes can't resolve beyond a certain pixel density, so there's a natural limit to how dense consumer displays will become). The only real impact is on accelerating R&D, and AI service providers probably won't have much bargaining power there, or at least it would take a lot of hard work on the sales side, not something companies can scale up effortlessly to millions of customers at the push of a button like web 2.0.

Another question is ... if we predict prices to go up once inflation subsides and interest rates go down, how much of this is already priced into the market?

Despite people saying the stock rally is driven by optimism about interest rate cuts, the US Treasury yield curve doesn't seem to be saying the same thing. It became basically flat at 5% a couple of months ago, and the 10-year yield is still at ~4.2% as of writing. If I'm reading this correctly, the market is saying that US interest rates will stay at ~4% for the coming 10 years, OR that the US has a ??% chance of defaulting.

So, if I'm not investing in the AI boom and I expect interest rates to come down relatively quickly in the next couple of years, instead of buying stocks and riding the volatility, why not just buy 30-year Treasuries at 4.3% yield and collect big profits when it comes down to 2% in two years? That way I don't have to bet that the "magnificent seven" or whatever is going to win the AI race. (FWIW, there's a bunch of antitrust risk for US tech as well. I still don't understand how they justify their P/E multiples... and as I said, the US Treasury yield curve doesn't seem to indicate the market has priced in the rate cuts. Stock investors are not bond investors, but the EMH has to account for something, right?)

That said, most of Big Tech can probably keep pumping up profits for a couple of years through old-school cost cutting (MOAH LAYOFFS) and enshittification. (By the way, enshittification should have been predicted 10 years ago; it's a miracle we've gotten this far.) Anyway, the point is that US tech stocks are too uncertain for lay people without a crystal ball to bet on. Too many unknown unknowns, and the upsides are already priced in while the downsides are ignored.
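The arithmetic behind that trade is easy to sketch. A rough back-of-the-envelope in Python, assuming an annual-coupon 30-year Treasury bought at par with a 4.3% coupon, and ignoring taxes, semiannual coupons, and coupon reinvestment:

```python
def bond_price(face, coupon_rate, ytm, years):
    """Present value of an annual-coupon bond at a given yield to maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

# Bought at par (price ~100), since the coupon equals the market yield.
buy = bond_price(100, 0.043, 0.043, 30)

# If yields fall to 2% two years later, 28 years remain to maturity.
sell = bond_price(100, 0.043, 0.02, 28)

print(buy, sell)  # the price jump is the capital gain, on top of coupons
```

The long duration is what makes the bet pay: the same 2.3-point yield drop barely moves a 2-year note, but it moves a 30-year bond's price by roughly half its face value.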

There's also a philosophical question raised -- given that I'm kind of rich-ish, as in my savings are much higher than my living expenses -- if I have enough to get by (as in, for pretty much everything), why do I even care about aggressively optimizing my net worth? I mean, it's always nice to be richer, but if you (for example) have enough to pay for pretty much all expected normal living expenses for the rest of your life, why do you care whether you're the 1% or 0.01%? To put it in a more sinister way, if the rich and poor divide becomes wider, I'm probably on the rich side. So do I really care whether I can buy merely 4 servants or 100 servants?

Especially so given that AI tech will likely make human servants mostly obsolete. Abundance doesn't sit well with harsh class stratification. Perhaps "compute" will become the new currency in the age of AI, but as I said, aside from niche applications, people don't really have *that* much demand for it.

What this all seems to imply is that our world as we know it will likely undergo rapid transformation. The political instability of traditional world powers; the apparent class stratification and challenges in the financial system (and we haven't even mentioned cryptocurrencies yet...); the abundance of information and intelligence on the horizon... It feels as if there should be a very coherent narrative somewhere that will look obvious in retrospect, but maybe because I'm a person of the "old world", I'm unable to visualize the "new world" these inherent contradictions will resolve into.

Transformation is what the seers and psychics seem to have predicted too. Only in this way is it all consistent.

That said, given that the guy on YouTube is a Brit, maybe his predictions are more applicable to the UK markets. One thing is clear: despite the rate hikes of the past 2 years, the FTSE seems to have stayed flat or perhaps even risen a bit. So if traditional logic holds, rate cuts will spur prices to rise significantly.

Seems like a good reason to buy, except that the UK economy (and perhaps more importantly, politics) is becoming shit real quick.

As I said, if you predict interest rates are going down a lot real soon, just buy 30 year bonds....

Saturday, March 16, 2024

Avoiding decision problems

I was kind of the influence behind Chaaaaak's paper on words.hk regarding "decision problem avoidance" ( https://repository.eduhk.hk/en/publications/building-cantonese-dictionaries-using-crowdsourcing-strategies-th ).  We discovered that for the purposes of documenting language usage, fitting reality into predefined boxes didn't work too well.

Back then I was still learning the fundamental concepts of distributed computing, and stumbled across Leslie Lamport's work. One of his papers (unrelated to distributed computing) described what he called "Buridan's Principle": a discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time. The argument (or proof, or whatever) Lamport used was very interesting; I'm not entirely convinced the proof is correct, but the conclusion (the principle) is not to be doubted, because it is true.
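The flavor of the principle can be shown (not proved) with a toy model of my own: treat the decider as a ball on an unstable equilibrium, where the input x0 is the initial lean and the "decision" is made once the state crosses a threshold. The escape time grows without bound as x0 approaches the balance point, so no single time bound covers the whole continuous input range:

```python
import math

def decision_time(x0, threshold=1.0, rate=1.0):
    """Escape time for the unstable dynamics dx/dt = rate * x: starting at
    x0 != 0, the state reaches |x| = threshold at t = ln(threshold/|x0|)/rate."""
    return math.log(threshold / abs(x0)) / rate

# The closer the input sits to the undecided point x = 0,
# the longer the "decision" takes -- with no finite upper bound.
for x0 in (0.1, 1e-3, 1e-6, 1e-12):
    print(f"x0={x0:g}  t={decision_time(x0):.1f}")
```

This is the same phenomenon hardware people call metastability in arbiters and flip-flops; the hypothetical `rate` parameter just sets how fast the system falls off the knife edge.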

I spent a while philosophizing about decision problems, which encompassed not only the ones normally seen in computer science, but also less conventional ones (e.g. "love is choice, and choice is giving things up"). This naturally led me to see the process of squeezing language usage into predefined categories as the same problem... and knowing how true Buridan's Principle is, especially when applied to a crowd(sourced project), avoiding such problems was key to a good design for the database framework.

And I guess I kind of left it at that.

Until recently, when I started dealing with issues of dualism. (In ALL its meanings.)

It struck me that the root of dualism is the decision process. The decision causes duality. Decision problems cause the problems with duality. When we don't decide, we don't have problems, so to speak.

And once we realize this, it collapses all the weird and interesting problems in logic to nothing.


I'm still grappling with how to act on such a worldview in practice, and how far one can go in normal life without engaging in decision processes... and I suspect it converges with some traditional wisdom passed down to us from an era when such things were still known.

Interpretations

Like, I already know this.

But it struck me that as long as we accept infinity,

*Everything* is inevitable.  The only question is why this, among all things.


Yes, Yes, and Yes

 

By the way, this basically proves, as a side effect, that computation is in the eye of the beholder.

The Game of Life is "just" the Game of Life (so to speak). That we somehow interpret it as a Turing machine is a "coincidence". Of course, that it is the Game of Life is also an interpretation. All computation requires intent and interpretation.
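To make that concrete, here is a minimal Life step of my own (not taken from any particular source): the rule just counts neighbors, and whether the resulting pattern is "an oscillator", "part of a Turing machine", or "just cells" is entirely up to the observer.

```python
from collections import Counter

def life_step(live):
    """One Game of Life step; `live` is a set of (x, y) live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Three cells in a row: a "blinker" to us, arbitrary state to the grid.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(life_step(blinker)) == blinker)  # prints True: period-2 oscillator
```

Nothing in the update rule mentions "oscillator", let alone "Turing machine"; those labels live entirely in the reading of the states.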

So, in the end, if we truly believe the universe is a giant computation... some people might think "for what? (it seems like a waste)", but to me, it's just more in-your-face evidence that WE are driving it through our intent and interpretation.

Our ability to interpret created the world we perceive.

Friday, March 15, 2024

Law of no unmanifested intentions

Just realized that "love is choice" is actually a negation of unmanifested intentions.

No wonder it carries a bit of divine force.

Wednesday, March 13, 2024

Error by dualist design

It seems that if we postulate the existence of spirits as non-physical things that cannot directly act on physical laws, that is likely sufficient to deduce the existence of fundamental errors (or at least randomness).

Let's say we created a simulated reality: a 100% closed system that can be explained by the laws within itself, without reference to the outside (our human) world.

But as part of the design, we want human operators to have influence over it. More interestingly, we want humans to be able to control (at least to some extent) the entities in the simulated world.

Then, we'd have to introduce "errors" or "randomness" or "fundamental limits to cognitive ability" so that this is possible.

If everything were perfectly known, it would be impossible for the world to have dualistic properties.

Sunday, March 3, 2024

"Individualized free will"

"Individualized" free will - :O

https://www.youtube.com/watch?v=wA0CAZOFNFU 26:50

- That's why they usually say "we"

- That's why they can feel the flow of information

On the other hand, it also implies that human free will is, relatively speaking, at odds with the world

Basically none of this is outside what I already knew, but worth noting down so I can think through what other implications there might be...

Saturday, March 2, 2024

Continuity across books

Besides "dynamically manifesting according to my state", the books also seem to let me read continuous themes and stories across different books. If I can't make sense of one book, no problem: pick up another book and start reading, and the answer will be there...

Honestly, this way of playing is pretty good; I should keep doing more of it.