Motivations

Posted on Aug 5, 2021

I have always felt that there was something missing from classical physics. Over the years, I was able to pinpoint a number of things that created that sensation:

  • The lack of “meta-discussion” about what it really means to define and isolate a system in order to study it. Why can’t I write a formula, or a strict declaration, of the system being defined? Why isn’t there a meta-language to describe the system, something that standardizes what I am putting inside and outside of it, instead of my teachers insisting on colloquial descriptions of bodies with weights and positions? And if such a language exists, why isn’t it common knowledge?
  • The names “Entropy” and “Free Energy”. What are they, other than a sleight-of-hand to avoid talking about the lack of tools to describe information flows in the system?
  • How rarely the probabilities of different outcomes are mentioned. This wasn’t the case in quantum physics or chemistry, where things become less abstract and need better definitions to match real-life experiments.

Infophysics is my personal journey through exploring what reshaping mechanical models, and introducing information into them, might look like. For now I only have some vague questions and ideas, which I plan to expand on in the future:

  • Consider two systems: System A and System B. In traditional mechanics, both are identical: a man sits on top of a hill. There is a big rock next to him. There is some prey downhill. The man in system A knows that pushing the rock would kill the prey and allow him to eat. The man in system B doesn’t. How is that expressed in physics? The concept of Gibbs free energy comes to the rescue. We say that there is more GFE in system A than in system B, due to the knowledge contained in the brain of our system A inhabitant. People also say that the entropy of system A is lower. What are the implications of this?
  • What does it look like when the probabilities of external events are modelled by a subsystem, stored internally (at a cost), and that subsystem reacts differently over time to such external events, leading to outcomes that might replicate the subsystem or destroy it? Can we treat life as a physical mechanism that replicates itself thanks to a model of external threats to the survival of its genes? How does that align with the entropic death of our universe? What is the best model to represent a constantly changing environment?
  • What are the economics of storing information in systems? Can we have a general-purpose formula that establishes the fragility of a system and clarifies what assumptions it is making, in order to understand it better? Why do big social media networks have an economic advantage over small-scale ones? What would be an environmental change that makes their information obsolete?
  • I’ve never observed a quark, but I did replicate the double-slit experiment. What does it mean? How is it that our attention, and the observation of different things at different times, changes how the dice are cast? Is anyone thinking differently about this?

This message was last updated on October 5th, 2021, and it’s intended to be a continuous journal of my exploration of this subject. I’d be very grateful for any info that could help me on my journey. Contact information will be up after I set up a secure messaging system.

Cheers,
Claude