I highly recommend the book "The Machinery of Life" by David S. Goodsell. It isn't too long, the text is interesting without getting too bogged down in the details (there would simply be too much to cover in detail), but the highlight is that Goodsell, a professor of molecular biology, is also an excellent illustrator, and the book is packed with detailed illustrations that alone make it worth buying. $18 on Amazon. (Get the 2nd edition, not the 1st.)
Just do a Google image search for the book title and author name to see what is inside.
Sounds great! Along these lines, I've really enjoyed some of the iBiology[0] videos, especially the ones about molecular machinery. It's the 3blue1brown[1] of molecular biology.
This book is genuinely awesome. I'm not sure why it's so cheap in hardback, but I'm glad it is. I buy a copy for almost anyone I know who's interested. It's very good at giving you the right impression while being clear and easy to learn from. David Goodsell is also a really great guy and worth supporting.
I wish I understood where all this stuff came from. When we talk about evolution we talk about random gene mutations and natural selection, but it doesn’t seem like the machinery of the cell itself is described by DNA. How do cells themselves and all their internal machinery evolve?
If all you had was a genome, could you really use it to engineer the cell required for it to go inside of?
Luckily, someone did an experiment similar to this one. They wrote the genome (and ONLY the genome) of a unicellular organism synthetically. Then they removed the genome from an existing cell of a different species, and inserted the genome they had created from scratch. The genome worked and produced versions of everything it needed, cell division happened, etc. Everything was produced and worked.
So - they bootstrapped that genome using the machinery of the emptied cell, but from that point on, the new genome produced everything needed for itself and many other cells. (See Venter's papers on creating synthetic life.)
In fact, following that, the same team refactored that genome to create a minimal viable genome, the MVP of genomes if you will, just to see how few genes you can get away with!
> If all you had was a genome, could you really use it to engineer the cell required for it to go inside of?
Definitively no if you consider multi-cellular organisms. And that's not just me being pedantic. Consider that every cell in your body, from white blood cells to skin cells to neurons, has the same DNA. It's entirely non-genetic factors (epigenetics like methylation of genes, but also hormones and existing intracellular structures) that lead to a neuron acting like a neuron instead of a liver cell.
Think about this: your body is made of protein, and each protein molecule is created by reading off a sequence of DNA. So your DNA has to do a LOT of copying, and for each copy it has to be opened up and put back together. In fact DNA is working hard every single day to keep us alive - it's not just sitting there until cell division. Even duplicating itself is no small feat - the total length of DNA in a human is on the order of 10B miles, roughly here to Pluto and back.
This is a bad analogy - the instructions for making everything in the cell are in the DNA, so you also have the blueprints for the hardware fabrication plant.
No, it's a pretty reasonable analogy. For the original point, if all we had was machine code, and that code had a complete description of the machine which could run it and how to create it, it would still be useless without a machine to process it. The code on its own would not be sufficient to create the machine.
You can't make use of DNA without all the complex machinery required to transcribe it, translate RNA and replicate it. There's a chicken-and-egg problem. The DNA does indeed encode the instructions to make and assemble all the rRNA, tRNA and protein sequences to do this, but you have to already have the machinery in place to do so. Kind of like compiling a compiler. You need an initial manual bootstrap, which in the case of life as we know it, took place billions of years in the past. Just as the code for a compiler is just so much meaningless ones and zeroes without a working compiler to process it, so is DNA in the absence of the translation machinery. Without any context for how to process it, it's just a meaningless jumble of bases.
One thing to think about. If we discovered intact dinosaur remains with non-degraded DNA, could we resurrect it? We don't have the machinery since it was lost with the death of the organism. But we could potentially bootstrap it by placing it in the cell of a related species, e.g. a reptile. But if it was a completely different form of life, we wouldn't even know where to begin.
Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks.
Yan KK, Fang G, Bhardwaj N, Alexander RP, Gerstein M.
Proc Natl Acad Sci U S A. 2010 May 18;107(20):9186-91. doi:10.1073/pnas.0914771107. Epub 2010 May 3.
> "The definitive feature of the many thousand cis-regulatory control modules in an animal genome is their information processing capability. These modules are “wired” together in large networks that control major processes such as development; they constitute “genomic computers.” Each control module receives multiple inputs in the form of the incident transcription factors which bind to them. The functions they execute upon these inputs can be reduced to basic AND, OR and NOT logic functions, which are also the unit logic functions of electronic computers. Here we consider the operating principles of the genomic computer, the product of evolution, in comparison to those of electronic computers. For example, in the genomic computer intra-machine communication occurs by means of diffusion (of transcription factors), while in electronic computers it occurs by electron transit along pre-organized wires. There follow fundamental differences in design principle in respect to the meaning of time, speed, multiplicity of processors, memory, robustness of computation and hardware and software. The genomic computer controls spatial gene expression in the development of the body plan, and its appearance in remote evolutionary time must be considered to have been a founding requirement for animal grade life."
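The AND/OR/NOT framing in that abstract can be made concrete. Below is a minimal sketch, with my own illustrative module and factor names (nothing here is taken from the paper), of a cis-regulatory module as a boolean function of its transcription-factor inputs:

```python
from itertools import product

# Illustrative sketch: a cis-regulatory module as boolean logic over
# its transcription-factor inputs. The module and its logic are
# hypothetical, chosen only to show the AND/OR/NOT idea.

def regulatory_module(activator_a: bool, activator_b: bool, repressor: bool) -> bool:
    """Transcribe iff either activator is bound AND the repressor is not."""
    return (activator_a or activator_b) and not repressor

# Enumerate the module's "truth table".
for a, b, r in product((False, True), repeat=3):
    print(f"A={a!s:5} B={b!s:5} R={r!s:5} -> transcribe={regulatory_module(a, b, r)}")
```

Real modules compute fuzzier, concentration-dependent versions of such functions, but the paper's point is that the unit logic is the same as in electronic computers.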
That was interesting, but I think their conclusion about the Linux kernel being comparatively top-heavy with relatively few "workhorses" was probably greatly influenced by the fact that they only analyzed the Linux kernel, not the kernel plus device drivers. IMO, device drivers should be considered part of the OS - really, the workhorses of the OS, not called by any other code.
Actually it's a very misleading analogy. In biology, that's precisely what is happening: The "kernel code" i.e. our genome in fact does have all the instructions on how to build a cell, how to build tissues, how everything communicates, etc. (my education: BA in math, PhD in Biology)
The point I was trying to illustrate was that neither operating systems nor genomes contain the information needed to specify their required operating substrate and environment.
Perhaps it may be possible to derive or infer these requirements through simulation and analysis of the OS or genome, or perhaps not.
I guess it depends on the degree to which the "code" defining the system is abstracted from its operational embodiment, i.e. is the operating system in question encoded in the form of a Hardware Description Language [1], FPGA IP cores, or more abstract high-level source code?
I assume it would be more difficult/impossible to work out the hardware requirements for an OS given just the high-level source code (are compilers included?) vs a low-level or "bottom"-level (hardware-level?) code.
Likewise for a genome, I don't think the sequence of As, Gs, Cs, & Ts specified in an organism's reference genome [2] entail the chemical and physical particulars needed to instantiate the genome in an environment (physical, virtual, whatever) such that it functions. On the other hand, if you gave me an actual genome comprised of purified genomic DNA, then I'm getting a big hint about how the code needs to be physically instantiated for it to work. From this hint, maybe a near-omnipotent reverse-engineer could infer the biochemical requirements (i.e. cell-free expression system or a donor cell) needed to boot up the organism.
Am I just being pedantic or do you see what I'm trying to get at?
No, the genome contains almost all of the information about the running environment. The epigenome holds the rest.
Both even contain information about the build environment.
Given high genetic mastery you would be able to figure out the conditions for the whole organism to grow, including required feedback loops. Of course, we're not even close.
Genetic code is somewhat close to a quine if you look at it right.
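For anyone unfamiliar with the term: a quine is a program that prints its own source code, which is the loose analogy here, a genome that encodes the machinery used to copy the genome. A minimal Python quine:

```python
# A minimal quine: running this program prints its own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The analogy stays loose, though: the genome needs pre-existing cell machinery to "run", just as this quine needs a Python interpreter.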
I'm not sure if you could work out from a genome what the cell should look like - probably not.
An important point is that cells are always born of other cells. In other words, a cell replicates by dividing itself, building the parts as it goes along.
So there is never such a thing as a 'naked' genome in biology. There are things like viruses, that hijack cell machinery. There are sperm cells, that carry some DNA into an egg and use that. There is the DNA in mitochondria that rely on the host cell for some proteins.
If you trace backwards, the genome and its substrate (the cell) have co-evolved to work together. The existing cell acts as the template for the organisation of a new cell, while reading the genome for the structure of its parts. Of course, the genome contains information about when to make the parts, and how to regulate them.
The information processing capability of cells, and the fact that single-celled creatures have behaviour, makes it hard for me to believe that neurons (not just in humans, in all animals with neurons) don't use any of this information processing capability.
I understand researchers have claimed to replicate the brains of some very simple animals (such as the 302-neuron C. elegans)... but have they replicated the behaviour, i.e. for the same initial and ongoing inputs, do you get the same outputs as the modeled brain? That should show the accuracy of the neuron model.
Cells are extremely crowded and busy... and random and violent places.
Molecular biology dances in a nano moshpit from hell.
The Inner Life of a Cell is badly misleading. It reinforces many educationally toxic misconceptions. Yes, the graphics technology was limiting. But the video doesn't attempt to mitigate the negative impact. For instance, adding a couple of frames of Goodsell's crowded proteins, fading to sparseness, could have reduced its reinforcement of the "big empty cell" misconception.
And it doesn't just hide almost everything (crowded), or slow everything way down (busy), or show completely aphysical rendezvous and docking (random), but it also strings together carefully selected snapshots to tell a misleadingly peaceful narrative (violent).
That kinesin walking along, towing a vesicle? Its "feet" are actually madly flapping around. It is randomly "stepping" back and forth - only its net motion is biased. And between one net step and the next, that big vesicle has been slammed around to every position within reach of that tether. It's a random walk on a leash. Reality isn't a donkey quietly towing a slowly-moving barge. It's a balloon flapping madly in a hurricane, tethered to an intoxicated panicked mouse clinging to a rope.
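The "random walk on a leash" picture is easy to simulate. Here's a toy sketch, with illustrative probabilities rather than measured kinesin kinetics: each stepping attempt goes forward or backward at random, and only the bias produces net transport.

```python
import random

def biased_walk(n_attempts: int, p_forward: float = 0.7) -> int:
    """Net displacement (in steps) of a walker that moves +1 with
    probability p_forward and -1 otherwise. The 0.7 bias is made up
    for illustration; real motor step statistics differ."""
    pos = 0
    for _ in range(n_attempts):
        pos += 1 if random.random() < p_forward else -1
    return pos

random.seed(42)
net = biased_walk(10_000)
# Expected drift is (2 * 0.7 - 1) * 10_000 = ~4000 net steps,
# despite thousands of individual back-steps along the way.
print("net displacement:", net)
```

The walker spends a lot of its attempts going backwards; it is only the statistics that look like purposeful towing.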
Biology rides a ragged energetic edge, between things being too strong, too expensive to dis/assemble, and being smashed to pieces too quickly.
More generally, the video illustrates a pervasive problem with science education media. Even when done "well", some aspects are done with great skill and care, while other aspects are silently left utterly bogus. Pity the students, who lack the knowledge to sort out which is which. So rich ecologies of misconceptions are established. And are intractably expensive to displace. It's so easy to create content that leaves students worse off than if they had seen nothing at all. So that's, in general, what we create.
The violence of nanoscale is a critical, defining characteristic. That so many biology undergrads are unclear on it, let alone high-school and primary students, shows just how very far we have to go before teaching deep understanding, transferable knowledge, and cross-cutting principles, finally becomes a reality. Or more upbeat, how breathtakingly awesome science education might eventually become.
(FWIW, the top section of my very crufty http://www.clarifyscience.info/part/Atoms might help some with getting a handle on at least the size aspect of small things.)
> It's a balloon flapping madly in a hurricane, tethered to an intoxicated panicked mouse clinging to a rope
Welp, my desk is now covered in coffee. Thanks for that one, I'll be stealing it ;)
It may be better to think of a cell as more of a liquid crystal than anything else, considering the densities. The synapse of neurons is incredibly dense with structures and scaffolding proteins. Even most grad students still think that the ER is a soma kinda thingy, but it goes all the way up the dendrite to just kiss the synaptic bouton. The ER is really more of a cell inside a cell, or rather the cell membrane is just a latex bodysuit for the ER.
It's always amazing to me when I hear guys like Elon and Kurzweil talking about MMIs and how you are never gonna actually die, you'll just upload your brain, somehow. Like, they really have no idea how clueless we are in biology. Eric Schmidt goes to parties in West LA about living forever. It's crazy. We are decades away from even a cursory understanding of 1 cubic mm of brain tissue, with all the neurons, astrocytes, glia, mesenchymal cells, etc. Like, we just found out that the immune system is in your brain too, like 3 years ago. I actually think Elon has better odds of putting 0.6 of a Saturn V on Mars and not having it explode before we have a MMI.
> liquid crystal [...] cell membrane is just a latex bodysuit for the ER
Has anyone seen an attempt to collect or catalog such analogies? "One way to think about <concept>, is <one-liner or short story>." Maybe something like http://bionumbers.hms.harvard.edu/ , but for analogical conceptual understanding, instead of rough quantitative understanding. My very fuzzy recollection is bionumbers grew out of one grad student thinking "I don't have a quantitative feel for my area", and starting to collect numbers. Could we do that for stories?
> upload your brain [...] crazy
Well, there's https://xkcd.com/793/ (clueless physicist meets unfamiliar field). But also... part of what makes science work, is fear of being embarrassed by getting it wrong in front of the community of one's peers. And to avoid that, extensively sanity checking with local peers. Trying to think clearly without those, using literature and smarts, but disconnected from the field(s), is hard. And bonus hard points for multidisciplinary questions.
"The simulation shows 1000 individual macromolecules diffusing, colliding and transiently associating with each other over the course of 10 microseconds of simulation; the translational diffusion coefficient of the GFP in this model is in agreement with experimental measurements."
Diffusion, Crowding & Protein Stability in a Dynamic Molecular Model of the Bacterial Cytoplasm. PLoS Comput Biol 6(3): e1000694. doi:10.1371/journal.pcbi.1000694
(Note that the proteins and other macromolecules in the visualization are all individually constructed of hundreds of atoms — tRNA is a common one and has ~729 atoms; other, bigger molecules have thousands; and H2O of course has just 3. Here is an illustration from David Goodsell's newest book showing the scale: http://imgur.com/a/shqME)
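For a feel for how a diffusion coefficient falls out of all that random slamming around: in three dimensions the mean squared displacement (MSD) of a random walker grows linearly with time, MSD = 6·D·t. A toy lattice-walk sketch in arbitrary units (not the paper's atomistic model):

```python
import random

def mean_squared_displacement(n_particles: int, n_steps: int) -> float:
    """Average squared distance from the origin after a 3D lattice walk:
    each step moves one randomly chosen coordinate by +/-1."""
    total = 0.0
    for _ in range(n_particles):
        x = y = z = 0.0
        for _ in range(n_steps):
            axis = random.randrange(3)
            step = 1.0 if random.random() < 0.5 else -1.0
            if axis == 0:
                x += step
            elif axis == 1:
                y += step
            else:
                z += step
        total += x * x + y * y + z * z
    return total / n_particles

random.seed(0)
msd = mean_squared_displacement(n_particles=2000, n_steps=100)
print("MSD after 100 steps ~", msd)  # theory: n_steps * step^2 = 100
```

In a real cytoplasm model, crowding drags the effective D well below the dilute-solution value, which is exactly what the GFP comparison in the paper checks against experiment.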
Thank you for this writeup that peels away a layer of simplification and more vividly and accurately explains what is going on. You have a way with words. The intoxicated panicked mouse will stay with me forever.
That being said, I don't agree that the use of simplification and abstraction in teaching is a pervasive problem. New concepts have to be introduced piecemeal. If a new topic were explained to an audience of neophytes in all its gory details, the effectiveness of the knowledge transfer would be low. Some may grasp the complicated details and form a valid mental model. However, many will be put off, as it is too much stuff at once.
Teaching is about introducing a simpler model first. Such a model is a rough abstraction with tons of simplifications and inaccuracies. Once understood by the student, it is about introducing a more intricate and realistic model, while demolishing parts of the simpler one that no longer fit. Once this more intricate model is internalized, another even more detailed one can be taught. This goes on until you get to the limits of understanding of a certain system or process. At that point, the student has the option of becoming a researcher and forging new models that have not been created before.
You cannot dump the most detailed and intricate model on new students. At the same time, a good teacher is cognizant of when their students have a solid enough grasp of a given model to start introducing the more intricate one, while dismantling the old one. It's a tough balancing act.
Yes, our understanding gets more refined the more we learn. It remains messy, error-prone and incomplete. So the solution can never be to learn things 'well' or get things right first time or produce perfect teachers/videos or something like that. The solution is to go on correcting misconceptions where we are interested. Ken Shirriff's blog, mncharity's website, comments here and so on simply are part of the ongoing spontaneous correction process. Like the contents of a cell, they may look crazy and disorganised but they get the job done.
Hi erasmuse, the Fallible Ideas mailing list (http://fallibleideas.com/discussion) would be a more fruitful place to discuss your ideas. There, openness to new ideas is valued, as is being willing to discuss disagreements persistently until resolution.
I doubt it. New ideas develop slowly and can't be communicated or explained until they are ready. If you look at creative intellectuals they work alone or, rarely, in pairs. Discussion to the point of resolution would be more like politics or opinion-leading; perhaps necessary for defence or for sorting out existing ideas but otherwise harmful to progress.
Work has multiple parts. Some work is done alone, some in pairs, and some in a public group. The public group part has value and importance: for example, getting more variety of criticism and other feedback such as what people don't understand. Work done in a public group also helps others learn, so that's good.
If, hypothetically, you should join the group, in what way could you find that out? What would change your mind?
Something like historical examples of fundamental breakthroughs in science or great works of art produced by committee.
Criticism is for stuff one doesn't like, which is why it belongs firmly in the public realm of news, politics, etc. A group isn't public. When it comes to your private work, ignore criticism and trust your intuition. Ideas need room to grow just like children do.
I think of this as the "iterated lying" and "accumulated models constitutes understanding" approach to science education. :)
Consider giving a briefing in the military, or to business management, or in a professional consult. Yes, you need to simplify. But distorting results, missing the point, saying things with little connection to reality, and being disorganized and incoherent, rises towards professional misconduct. Describing a foreign culture to generals, or tech to middle management, doesn't seem incomparable to describing the physical world to kids. Science education content currently gives a really wretched briefing on the physical world.
> student [...] demolishing parts of the simpler [model] that no longer fit [...] This goes on until you get to [...]
becoming a researcher
But that doesn't appear to be what's happening now.
Chemistry education research describes chemistry education content as incoherent, leaving both teachers and students deeply steeped in misconceptions.
But maybe in college? First-tier astronomy graduate students are often unable to tell a 5-year-old what color the Sun is without getting it wrong. First-tier medical school graduate students seem to have little grasp of cell size. These misconceptions and gaps then impair other foundational understanding. It is hard to alter misconception ecologies. Think kudzu.
But maybe researchers? Active researchers chop back their own kudzu in their specific research area. And variously trim it in areas they teach. But the ramshackleness of people's understanding increases rapidly, even within their field, as you move away from their narrow research area, and off into their kudzu forest. Where everyone else lives.
> [paraphrased:] a balancing act of introducing and dismantling increasingly detailed and intricate models
What does "teach friction" mean? Does it mean teaching kindergarten kids what can make them slip and fall? And behavioral and engineering strategies for avoiding that? Sock nubbies need to be on the bottom? Or does it mean teaching them, years later, algebraic plug-and-chug of Amontons' law, with large objects sliding on pig fat? And for the pig fat model, should they develop a feel for reasonable sliding numbers? Should they be able to judge how well unfamiliar situations match the model?
Imagine being in a 19th-century one-room schoolhouse with a book or two. You might aspire to teaching plug-and-chug, on models students won't use in the real world. Imagine being in a 2017 classroom. You have plug-and-chug. You might aspire to teaching numeracy and transferable knowledge (can be applied to unfamiliar problems), but it would be hard, given the constraints you face. Now imagine a 2037 classroom. The kids have had AR their entire lives. Hybrid computer-human systems have dropped the currently ghastly-high cost of pulling together insights from very large numbers of busy and expensive domain experts. So what could you aspire to?
Might we aspire to a hands-on deep-and-broad understanding of the physical world? It's straightforward to teach early primary students bits of foundational knowledge that graduate students should have, but often don't. It's currently too expensive to scale that. Even considering the positive feedback of getting things "right". But costs are declining. Maybe we'll hit a new regime, as bizarrely unfamiliar, as say expecting peasants to learn to read?
When we look, our visual system comprehends in terms of boundaries and empty space.
Thus we know the location and motion of the different things we see.
But "surfaces" and corresponding "empty spaces" don't occur at the molecular scale in living things. (What makes something a discrete "thing"? What makes something "empty"?)
In cellular environments, nothing is empty, everything is touching, nothing is static, everything is in chaotic motion.
Thus molecular reality presents something of a visual paradox and a philosophical puzzle: how can we visualize something that is conceptually incompatible with how our visual system sees?
It would be excellent if there were more focus on this problem in the HCI communities (looking at you, ACM SIGGRAPH).
Trajectories help. Some atoms travel together, others don't. And they go different places. Families on public transit are localized, and get on/off together. Cars are mixed together on the road, but with tracking data, you can tell the taxis from the commuters. And trajectory paths can be simplified, and described in aggregate ("average time to reach X").
Conformation spaces help. A thing changes shape, but the shapes are drawn from a family of likely possibilities. Your arm is likely to remain attached to your shoulder, and is less often raised up than not. Even in a tangled rugby ruck.
Some things are ambiguous, but others not. Youtube had a nice ab initio molecular dynamics simulation of water (which I've been unable to find again). As a H+ proton wanders, sometimes it seems bound to a particular water, and other times, there's more of an area effect involving several. But a water molecule's nuclei generally stick with each other.
The usual electron-density isosurface depictions have disadvantages. Picture traffic cones, say arranged in a tight grid. They have a peak, and their rims touch. Like the electron densities of atoms. But visually it isn't a problem. So replace the isosurface with a cloud, or just turn down the surface threshold, and the view is as empty as you want it to be. An electrostatic suspension of tiny flecks of neutron-star-like material floating in fluff.
With eye tracking and XR, one might play games of visual selection. Show a 3D mess of proteins, clearly rendering one protein deep inside, but leaving enough visual clues of the others to serve as focus targets. When your eyes shift target, change which protein is rendered clearly. Or do more with that selection, like showing trajectory, or showing other proteins too, related by type or process or history. You might "see" a 3D gestalt.
This bit was bugging me, since the video undoubtedly was, and is, still useful.
One resulting (late night) thought is that educational content might be viewed as complexes. A subunit which is toxic and/or dysfunctional in isolation might be powerfully useful if other subunits or chaperones or cofactors are present. One might add content which prevents the misconception formation that would otherwise occur.
So Inner Life, if used in isolation, is badly misleading. Raising an interesting question of what it should be paired with, and how.
There may be ordering and timing constraints. Asking an eyewitness a memory-distorting question isn't reversible - the damage is done. "How fast was the car moving?" "Did he have a beard?" So when seeing misconception-forming content, it may be better to inoculate first than to treat afterwards.
And longer-term issues. If one subunit is more memorable than another, or otherwise gets more spaced repetition, it might end up isolated later. "The only thing I remember about C is X", where X by itself is a toxic summary of C.
You don't have to teach everyone everything. This is a misguided belief of the modern western education system that evolved from Christian roots.
As Bruce Lee would say - I cannot teach because I don't believe in systems or methods. So how do I learn? Only when I look for the cause of my ignorance.
The masses don't spend their time looking for the cause of their ignorance.
When doing adult outreach, I sometimes get feedback like "Atoms. I hated learning about atoms. And I've never used any of it. Why worry about teaching atoms? Teach writing and collaboration instead." And I say something like the following.
Leaving aside the question of whether atoms should be taught, if we do spend time teaching such, it would be nice to succeed at teaching it, which we currently aren't.
But bigger picture, I speculate that the unpleasantness and lack of utility, are caused by how atoms are taught, rather than by the subject of atoms. Perhaps taught much better, atoms might hang together with lots of other things (interwoven finger gesture), creating fun and powerful insight into the world.
> You don't have to teach everyone everything.
There's an old idea that the way to learn history is to start with whatever interests you. Because history is such an interconnected tapestry that it doesn't matter which thread you start on. If you like tiddlywinks, you'll soon encounter materials, and trade networks, and game evolution, and social partitioning, and... everything.
Science and engineering are also richly interwoven webs. So why can't you do something similar? Because our science and engineering expertise is too scattered, insufficiently disseminated and integrated. And with past tech, that was too expensive to change.
With better tech, and a lot of societal effort, that might change. So "anyone could learn about anything". Rather than almost immediately hitting a story of "Ah, that touches on several areas. And I'm sorry, textbooks handle all of them badly. You might spend a few months getting up to speed on the research literature in each area, but even then... Well, you really have to move across the country for long in-person conversations with one of a few professors (crazy busy) or their students, and then do that again for each other area. But... have you considered just giving up and crushing your interest now?"
I spoke to a colleague who had done some PhD work in the physical chemistry of cells. Concepts like 'liquid' do not translate well at the intracellular level. 'Holes' or 'channels' in a membrane, again, are mostly metaphors rather than mechanistic statements of completeness. He said for some things, an Escher infinite space-filling grid was as good a metaphor as any for the cellular structure that the various organelles negotiate. (This, btw, is asinine, and would deserve a "physical chemist here: this is bullshit" response.)
In my experience, K-12 and 4-year college were all about "useful abstractions and assumptions" to help people understand concepts, and grad material was about slowly peeling away those assumptions and abstractions that made it convenient to learn and teach but were never really there (at least not guaranteed to be there).
One of the best teachers I had forced us to write our assumptions down every time for every problem on the test. If you didn't write any, you got zero credit. "What's the point in learning equations if you don't know when they stop applying?" You also got _negative_ credit if you applied the wrong assumptions because you were just guessing or going on habit.
Although there is truth in the post, it is not entirely correct. Many chemical and biological processes take place on a much slower timescale than hundreds of thousands of events per second, many miles per hour, etc.
These processes include DNA synthesis/cell division, transcription, the transmission of electricity / ions between neurons, and many other basic processes.
Also, many proteins don't simply float around the cell at these massive speeds - they form stable, localized protein complexes that keep doing what they're doing at the same location for quite a while. That's why we can image them and make real-time movies at human scales of perception (seconds, minutes, etc.).
Even more so for more complex processes - this is also why cellular motion takes quite a while. Go on YouTube and watch Dictyostelium cells move towards folic acid: it takes many hours, despite the presence of a very clear folic acid gradient as the signal.
Also, speeds like "100 times per second" should be scaled down with the size scaling, if you want a feel for mechanics. Times Square is about 1km long, and a eukaryotic cell around 10 microns, so scale by 100 million. (A pretty typical protein, 5nm across, becomes 0.5 meters across in Times Square.) From this perspective even the rapid purposeful actions the article talks about go extremely slowly: 100Hz becomes a million seconds per action -- a couple weeks. The biomolecules do whip around at random very fast -- but net progress happens only after a lot of wriggling.
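The arithmetic above, spelled out using the comment's own round numbers:

```python
# Scale a ~10 micron cell up to a ~1 km Times Square, and scale time
# by the same factor so mechanical intuition carries over.
cell_size_m = 10e-6        # eukaryotic cell, ~10 microns
times_square_m = 1_000.0   # ~1 km
scale = times_square_m / cell_size_m
print("spatial scale factor:", scale)  # 1e8

protein_size_m = 5e-9      # a pretty typical ~5 nm protein
print("scaled protein:", protein_size_m * scale, "m")  # ~0.5 m

rate_hz = 100.0            # "100 times per second"
scaled_time_s = scale / rate_hz  # each 0.01 s action, scaled by 1e8
print("one action takes", scaled_time_s / 86_400, "days")  # ~11.6 days
```

So in the Times Square visualization, one "100 Hz" action stretches out to roughly a couple of weeks, which is the point being made.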
> Also, speeds like "100 times per second" should be scaled down with the size scaling, if you want a feel for mechanics.
I'm a little confused: are you saying that the real speeds and rates in the article/this thread are all too fast by a factor of 100 million, or are you saying that it's useful to perform this scaling mentally when imagining these systems, to get a better intuition of their mechanics...?
The latter. Maybe I should reread the post, but it invites you to visualize a cell as Times Square -- scaling the sizes -- and then the unscaled speeds give the wrong impression within that visualization. (Mechanical properties like stiffness don't vary when you scale space and time together.) Outside of it, of course the absolute speeds are what you want to know.
This is part of the reason why neurons, eyes, the nose, and the inner ear use special superstructures and electrical ion interfaces. These vastly help organize and speed up chemical transport.
The cytoskeleton (which is partly responsible for cellular transport) has been implicated in long-term memory.
I found Goodsell's images of the crowded intracellular environment in E. coli useful visualisations in developing a model of a living cellular process, namely transcription control. Transcription is the first step in gene expression: in E. coli, RNA polymerase transcribes a complementary copy of a gene, messenger RNA, for further processing to protein. The transcription process is controlled by proteins that either compete with RNA polymerase for the start site of transcription (turning the gene off) or bind adjacently and promote transcription (turning the gene on).

The difficulty I had in constructing a mathematical model of transcription control was that I wanted to include nonspecific binding, where RNA polymerase binds with low affinity to random stretches of DNA. While such binding occurs with low affinity, the sheer length of the DNA meant that a significant proportion of the RNA polymerase was bound in that form.

Fortunately, I became aware of the work of people like Allen Minton (NIH) and Tom Record (U Wisconsin-Madison), who studied molecular crowding. To borrow a sentence from Wikipedia, "[H]igh concentrations of macromolecules reduce the volume of solvent available for other molecules in the solution, which has the result of increasing their effective concentrations." https://en.wikipedia.org/wiki/Macromolecular_crowding I found (I hope correctly) that Tom Record's quantification of crowding, as effecting a roughly 100-fold increase in effective concentration, "exactly" compensated for the reduction due to nonspecific binding.
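To illustrate why nonspecific binding matters despite its low affinity, here is a toy mass-action sketch. All the numbers are hypothetical placeholders (the binding constants and concentrations are assumptions for illustration, not values from Record's or Minton's work); the only real input is that the E. coli genome is millions of base pairs long, so there are millions of weak sites per strong promoter:

```python
# Toy illustration: weak binding x many sites can outweigh strong binding x one site.
# All constants below are made-up, order-of-magnitude placeholders.

K_specific = 1e10        # M^-1, strong binding at the promoter (assumed)
K_nonspecific = 1e4      # M^-1, weak binding to random DNA (assumed)
promoter_conc = 1e-9     # M, roughly one promoter copy per cell (assumed)
n_sites = 4.6e6          # ~one nonspecific site per bp of the E. coli genome

site_conc = n_sites * promoter_conc   # total nonspecific sites, ~4.6e-3 M

# Relative statistical weights of the two bound states:
specific_weight = K_specific * promoter_conc        # 10.0
nonspecific_weight = K_nonspecific * site_conc      # 46.0

# Despite a million-fold weaker affinity, nonspecific DNA wins:
print(nonspecific_weight / specific_weight)         # 4.6
```

The point is only the shape of the argument: a huge number of weak sites can sequester most of the polymerase, and a crowding correction of order 100x to effective concentrations is large enough to push the balance back.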
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5425810/
If it helps, consider that you are the descendant of the original "grey goo" which since pervaded the surface of an entire planet. The inheritor of a billion years of techniques in construction, alliance, deception, and warfare. A uniquely coordinated swarm of assimilating nanotechnology, poised to leap into the cosmos.
Anything can be described both fancifully or dreadfully. One can say we are blobs of goo, biological automatons living on a speck of dirt in a googolplex of a cosmos. Or one can say that we are the result of billions of years of evolution on top of natural laws that first produced symmetries, conserved quantities, and higher-level order like atoms and then stars - which then birthed heavier elements that eventually became us. So when you look up at the hard, mineral stars, remember that goo is just a mindset, for those very stars are effectively the embryonic stage of life itself.
Be thankful you're not a Martian Print Amoeba, otherwise someone might kill you and throw your cadaver into a vat of fixing chemical while you're emulating an iPhone.
Not that many, actually. The estimates that people are made of more non-self stuff than self stuff were based on no serious measurement; it's more of an urban legend than a fact.
Current estimates are that there are slightly more bacteria than human cells (3.8x10^13 vs 3.0x10^13) in a "reference man". However, the mass of the bacteria is only 0.2kg, so human cells totally dominate by mass. Interestingly, most of the human cells by count are red blood cells.
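The quoted numbers are quick to sanity-check. A short sketch (the ~70 kg body mass for a "reference man" is my assumption, not from the comment):

```python
# Check the bacteria-vs-human-cell numbers quoted above.
bacteria = 3.8e13
human_cells = 3.0e13
print(bacteria / human_cells)   # ~1.27: slightly more bacteria, by count

# Mass: ~0.2 kg of bacteria in a ~70 kg reference body (70 kg assumed)
print(0.2 / 70 * 100)           # well under 1% of body mass is bacterial
```

So by count the ratio is close to 1:1, while by mass human cells dominate overwhelmingly.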
Nope, it's no urban legend. Recent findings suggest there are free-tissue bacterial species in humans that were previously unknown. So the ratio of bacterial cells to human cells is only going up with understanding, not down.
This article mentions two things as if they are unrelated. That the cellular environment is crowded is exactly why you need transport proteins and vesicles. All the really fast proteins are mostly going nowhere; they're just wobbling on the spot, bumping into everything crammed next to them, sandwiched most of the time between membranes they can't diffuse through freely. It's like angry jelly, not hot soup.
"As a result of all this random motion, a typical enzyme can collide with something to react with 500,000 times every second. Watching the video, you might wonder how the different pieces just happen to move to the right place. In reality, they are covering so much ground in the cell so fast that they will be in the "right place" very frequently just by chance."
That's the problem with those visualizations - they aim to give one an intuition of what's happening, but in fact they mislead. I wonder how it's possible to give an intuition of the reality when, in addition to the crazy statistical noisiness of the scene, those molecules are also in quantum superpositions.