Hacker News | sigmoid10's comments

The issue here is that the people commenting on whether something is a good or bad idea usually don't have the necessary insight to give useful comments either way. But with certain trendy topics, many people still feel the need to express their shallow opinions. That is especially true on HN, because many like-minded people will chime in, upvote and increase visibility as long as they themselves feel validated, irrespective of whether what was said is true or not.

In fact I'd love to see an inverse to this list, i.e. shit people celebrated here that failed miserably. Although failure as a business can have many reasons and need not be due to the core business idea. It's probably much harder to get this data than searching early HN threads for high-value IPOs: you'd have to search for popular threads, then track down the companies and find out what eventually happened.


It also varies inside countries. Some priests are simply more demure than others. The church as an institution certainly prefers the more radical conservatives as you go higher up the chain, but many low level employees that still talk to commoners do realize that these views are going to put off more people than they attract in developed countries. So in the long term they will only be left with a bunch of crazy radicalists and a silent majority that wants absolutely nothing to do with them.

[flagged]


Perhaps you could share your alternative characterisation of the church to clarify what you mean?

I would say that the burden of proof is yours first.

But since you asked...

> The church as an institution certainly prefers the more radical conservatives as you go higher up the chain

Where are these "radical conservative" bishops? They're anything but "radical". If anything, they tend toward a soft middle that is very slow to act. Indeed, that's one of the gripes "radtrad" types tend to have. They would prefer more bishops were made in their own image.

Instead, we see bishops aggressively curtailing more traditional expressions of the faith, while permitting plenty of liturgical abuse of, shall we say, a decidedly "untraditional" stripe.

> So in the long term they will only be left with a bunch of crazy radicalists and a silent majority that wants absolutely nothing to do with them.

You can't be serious. If anything characterizes the post-Vatican II Church, it has been the greater influence of "progressive" and "modernist" elements, some of them quite radical. Only in relatively recent times are we seeing a growing, younger crop returning to traditional forms. You can expect that the Church will look more traditional within a generation or two.

Your claim reminds me of those who clamored to make the Church more "relevant". They claimed that if the Church didn't do so, it would lose the youth and imperil the future of the Church.

Instead, what we saw was the reverse. As the Church became more "relevant" - which is to say, more concerned with the temporal and the temporary, conforming to the times instead of shaping men and the times - it became less appealing to the youth. It should be obvious in retrospect. What people desire from the Church is the eternal and the transcendent, not more of the same that you can get elsewhere and in bulk.

So, all that "relevance" produces is a large exit of the youth from the Church. Attend a "progressive" parish and you'll see plenty of empty pews with a few aging boomers. Go to a more traditional parish, and you see the pews brimming with families. These are not isolated cases. These are broad trends.

If you do see a swing toward the traditional, it is not because "crazy radicalist conservative" bishops are concentrating those elements, but because of a process of natural selection. "Relevance", it turns out, is dysgenic. And as the traditional element increases and becomes more visible, so does the visibility of its substance, which is what attracts converts and reverts.


Blowing more than 800kB on essentially an HTTP API wrapper is actually kinda bad. The original Doom binary was 700kB and had vastly more complexity. This is in C after all, so by stripping out nonessential stuff and using the right compiler options, I'd expect something like this to come in under 100kB.

Doom had the benefit of an OS that included a lot of low-level bits like a net stack. This doesn’t! That 800kB includes everything it would need from an OS too.

Maybe you’re misremembering or referring to Doom (2016). The original Doom was developed for DOS and id had to build a lot of its own network stack. BSD style socket based networking wasn’t a given in DOS.

Still, zclaw is an impressive achievement.


Yah, my back-of-the-envelope math:

the "app logic"/wrapper pieces come out to about 25kB;

WiFi is 350kB, TLS is 120kB, and the certs are 90kB!


> vastly more complexity.

Doom is ingenious, but it is not terribly complex IMHO, not compared to a modern networking stack including a WiFi driver. The Doom renderer's charm is in its overall simplicity. The AI is effective but not sophisticated.


The whole ESP32 library ecosystem is kind of bloated. To enable Bluetooth, WiFi or HTTP handling, you need to embed some large libraries.

Yeah, I sandbagged the size just a little to start (small enough to fit on the C3; 888 was picked for good luck & prosperity; I even have a build that pads to get exactly 888), so I can now try to reduce some of it as an exercise etc.

But 100kB you're not gonna see :) This has WiFi, TLS, etc. Doom didn't need those.


Everything connected to the internet was really bad until automatic updates that are enabled by default (or enforced by sysadmins) became a thing. Wordpress, MySQL, Active Directory... all those things had unpatched exploits that you could trivially tap into until the 2010s, if you knew how to use nmap and metasploit. Add insecure WiFi standards like WEP, and basically every other network was fair game for people with some basic skills. Heck, Facebook only made HTTPS mandatory in 2013, after someone made a browser plugin that let literally everyone steal cookies on public networks and log in to other people's accounts. Gen Zers never saw this, but the modern web as a secure place where you can comfortably buy stuff or do banking without worries is a relatively recent invention.

It's also heavily influenced by businesses. Most employers will happily hand you an Apple or Android phone for work, but I don't think there is a single company out there that would dare to hand normal people an Ubuntu Touch based phone.

This theory has been put forward, but it's important to point out that there is no real evidence yet. An alternative theory is diet, which is also the leading theory for increasing incidences in non-athletes. Highly processed, calorie dense foods have been on the watchlist for a while, and ultra endurance athletes have a special need for these to satisfy their caloric requirements. It could also be a combination of these factors or something else that was missed entirely so far.

For me it's Opus 4.6 for researching code/digging through repos, gpt 5.3 codex for writing code, gemini for single hardcore science/math algorithms and grok for things the others refuse to answer or skirt around (e.g. some security/exploitability related queries). Get yourself one of those wrappers that support all models and forget thinking about who has the best model. The question is who has the best model for your problem. And there's usually a correct answer, even if it changes regularly.

Yes I came to the same conclusion. Just to add: be careful with Opus 4.6 guys. It’s expensive…

Using simtheory.ai, which is very good; you can switch models within a conversation and use MCPs.

Are you associated with this somehow?

What you are describing is a failure to integrate AI into said company systems. I have seen quite a few companies now that buy MS AI products with great hopes, only to be severely disappointed, because they may as well have just used vanilla ChatGPT (in fact, then they would at least get newer models faster). But there are counterexamples too. If you can pull all your company documentation into a vector DB and build a RAG-based assistant, you can potentially save countless hours across your workforce, and possibly for customers too. But this is not easy, and it also requires a level of UI interactivity that no one really offers right now. In fact, they can't offer it, because you usually need to integrate ancient, arcane sources into your system. So you do have to write a lot of integration code yourself at every step. Not many companies are willing to spend that kind of money and effort, because managers just want to buy an MS product and be done with improving efficiency by next quarter.

I have been using vector-based RAG for about two years now, and I am not knocking the tech, but last year I started experimenting with going way back in time and trying BM25 search (or hybrid BM25 and vector) in parallel. So: not even a very good example use case for LLMs; the tech is not always applicable.

EDIT: I am on a mobile device and don’t have a reference handy but there have been good papers on RAG scaling issues - basically the embedding space gets saturated (too many document chunks cluster in small areas of the embedding space), if my memory is correct.


Depends on your use case. A system that can do full-text and semantic search across a vast archive, open files based on that search to retrieve detail, and generate an answer after sifting through hundreds of pages is pretty powerful. Especially if you manage to pair it with document link generation and page citation.

The first commit was 17k lines. So this was either developed without using version control or at least without using this gh repo. Either way, I have to say certain sections do feel like they would have been prime targets for having an LLM write them. You could do all of this by hand in 2026, but you wouldn't have to. In fact, it would probably take forever to do this by hand as a single dev. But then again, there are people who spend 2000 hours building a CPU in Minecraft, so why not. The result speaks for itself.

> The first commit was 17k lines. So this was either developed without using version control or at least without using this gh repo.

Most of my free-time projects are developed by me shooting the shit with code on disk for a couple of months until it's in a working state, then I make one first commit. Alternatively, I commit a bunch iteratively, but before making it public I fold it all into one commit, which becomes the init. 20K lines in the initial commit is not that uncommon; it depends a lot on the type of project though.

I'm sure I'm not alone with this sort of workflow.


Can you explain the philosophy behind this? Why do this, what is the advantage? Genuinely asking, as I'm not a programmer by profession. I commit often irrespective of the state of the code (it may not even compile). I understand git commit as a snapshot system. I don't expect each commit to be pristine, working version.

Lots of people in this thread have argued for squashing, but I don't see why one would do that for a personal project. In large-scale open-source or corporate projects I can imagine they'd want clean commit histories, but why for a personal project?


I do that because there's no point in anyone seeing the pre-release versions of my projects. They're a random mess that changed architecture 3 times. Looking at that would not give anyone useful information about the actual app. It doesn't even give me any information. It's just useless noise, so it's less confusing if it's not public.

I don't care about anyone seeing or not seeing my unfinished hobby projects, I just immediately push to GitHub as another form of backup.

I don't care about backing up unfinished hobby projects, I just write/test until arbitrarily sharing, or if I'm completely honest, potentially abandoning it. I may not 'git init' for months, let alone make any commits or push to any remotes.

Reasoning: skip SCM 'cost' by not making commits I'd squash and ignore, anyway. The project lifetime and iteration loop are both short enough that I don't need history, bisection, or redundancy. Yet.

Point being... priorities vary. Not to make a judgement here, I just don't think the number of commits makes for a very good LLM purity test.


I literally keep this in my bash history so i can press up once and hit enter to commit: `git add *; git commit -m "changes"; git push origin main;`

I use git as backup and commit like every half an hour... but I make sure to give a proper commit message once a certain milestone has been reached.

I'm also with the author on squashing all these commits into a new commit and then pushing it in one go as the init commit before going public.


You should push to a private working branch, and frequently. But when merging your changes to a central branch, you should squash all the intermediate commits and provide just one commit with the asked-for change.

Enshrining "end of day commits", "oh, that didn't work" mistakes, etc is not only demoralizing for the developer(s), but it makes tracing changes all but impossible.
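That flow can be sketched with `git merge --squash`; all branch, file, and commit names below are invented for the demo:

```shell
# Sketch of squash-on-merge: messy WIP commits on a branch, one commit on main.
git init -q -b main demo_sq && cd demo_sq
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "init"
git checkout -q -b feature
echo a > a.txt && git add a.txt && git commit -q -m "WIP"
echo b > b.txt && git add b.txt && git commit -q -m "oh, that didn't work"
git checkout -q main
git merge --squash feature   # stages the combined diff without committing
git commit -q -m "Add feature X as one commit"
git log --oneline            # only 'init' and the squashed commit remain
```

The intermediate "oh, that didn't work" history never reaches the central branch.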


Yeah, I don't care about that for my tiny hobby projects which are used by no one. XD

> I don't expect each commit to be pristine, working version.

I guess this is the difference, I expect the commit to represent a somewhat working version, at least when it's in upstream; locally it doesn't matter that much.

> Why do this, what is the advantage?

Cleaner, I suppose. It doesn't make sense to have 10 commits where 9 are broken and half-finished and the 10th is the only one that works; I'd rather have one larger commit.

> they would like to have clean commit histories but why for a personal project?

Not sure why it'd matter whether it's personal, open source, corporate or anything else, I want my git log clean so I can do `git log --oneline` and actually understand what I'm seeing. If there is 4-5 commits with "WIP almost working" between each proper commit, then that's too much noise for me, personally.

But this isn't something I'm dictating everyone to follow, just my personal preference after all.


> If there is 4-5 commits with "WIP almost working" between each proper commit, then that's too much noise for me, personally.

Yep, no excuse for this; feature branches exist for this very reason. WIP commits -> git rebase -i master -> profit
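In case it helps anyone, that rebase step can even be scripted: `GIT_SEQUENCE_EDITOR` below rewrites the rebase todo list so everything after the first commit becomes a fixup. All names here are hypothetical:

```shell
# Sketch: fold WIP commits into one via a non-interactive 'git rebase -i'.
git init -q -b master demo_rb && cd demo_rb
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "init"
git checkout -q -b feature
echo a > f.txt && git add f.txt && git commit -q -m "WIP almost working"
echo b > f.txt && git commit -q -am "WIP almost working 2"
echo c > f.txt && git commit -q -am "feature done"
# Turn every todo line after the first from 'pick' into 'fixup':
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/fixup/"' git rebase -i master
git log --oneline master..feature   # a single squashed commit
```

Normally you'd run `git rebase -i` interactively and mark the lines yourself; the sed trick just makes the demo reproducible.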


Fair enough. Thanks for the clarification. Personally, I think everything before a versioned release (even something like 0.1) can be messy. But from your point I can see that a cleaner history will have advantages.

Further, I guess if the author is expecting contributions to the code in the future, it might be more "professional" for the commits to be only the ones which are relevant.

My own projects, I consider, are just for my own learning and understanding so I never cared about this, but I do see the point now.

Regardless, I think it still remains a reasonable sign of someone doing one-shot agent-driven code generation.


One point I missed that might be the most important, since I don't care about it looking "professional" or not, only about how useful and usable something is: if you have commits with the codebase in a broken state, then `git bisect` becomes essentially useless (or very cumbersome to use), which will make it kind of tricky to track down regressions, unless you'd like to go back to the manual way of tracking them down.
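To make that concrete, here's a toy repo where an automated `git bisect run` pinpoints the first bad commit; everything (commit contents, the test script) is fabricated for the demo, and it only works because every commit is in a runnable state:

```shell
# Sketch: five commits, the regression lands in commit 4; bisect finds it.
git init -q -b main demo_bs && cd demo_bs
git config user.name demo && git config user.email demo@example.com
for i in 1 2 3 4 5; do
  if [ "$i" -ge 4 ]; then st=1; else st=0; fi
  printf 'exit %s # version %s\n' "$st" "$i" > test.sh   # fails from commit 4 on
  git add test.sh && git commit -q -m "commit $i"
done
git bisect start HEAD HEAD~4                 # bad = HEAD, good = oldest commit
git bisect run sh test.sh 2>&1 | tee bisect.log   # runs the checked-out test.sh each step
git bisect reset >/dev/null 2>&1
```

If any intermediate commit simply didn't build, `test.sh` would fail there for the wrong reason and bisect would blame the wrong commit.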

> Regardless, I think it still remains a reasonable sign of someone doing one-shot agent-driven code generation.

Yeah, why change your perception in the face of new evidence? :)


I see the point.

Regarding changing the perception, I think you did not understand the underlying distrust. I will try to use your examples.

It's a moderate-size project. There are two scenarios: the author used git/some VCS, or they did not use it. If they did not use it, that's quite weird, but maybe fine. If they did use git, then perhaps they squashed commits, but at a certain point those commits did exist. Let's assume all these commits were pristine. It's 16K loc, so there must be a decent number of these pristine commits that were squashed. But what was the harm in leaving them?

So the history must have been made of both clean commits and broken commits. But we have seen this author likes to squash commits. Hmm, so why didn't they do it before, and only towards the end?

Yes, I have been introduced to a new perception, but the world does not work on "if X, then not Y" principles. And this is a case where the two things being discussed are not mutually exclusive, as you are assuming. But I appreciate this conversation, because I learnt the importance and advantages of keeping a clean commit history, and I will take that into account next time before concluding that something is just another one-shot LLM-generated project. But nevertheless, I will always consider the latter a reasonable possibility.

I hope the nuance is clear.


> I guess this is the difference, I expect the commit to represent a somewhat working version,

On a solo project I do the opposite: I make sure there is an error where I stopped last. Typically I put in a call to the function that is needed next, so I get a linker error.

6 months later, when I go back to the project, that linker error tells me all I need to know about what comes next.


How does that work out if you want to use `git bisect` to find regressions or similar things?

I don't do bisects on each individual branch. I'll bisect on master instead and find the offending merge.

From that point bisect is not needed.


Hello, not the poster, but I am BarraCUDA's author. I didn't use git for this. This is just one of a dozen compiler projects sitting in my folder, hence the one large initial commit. I was only posting on GitHub to get feedback from r/compilers and friends I knew.

The original test implementation of this for instance was written in OCaml before I landed on C being better for me.


Or the first thousand commits were squashed. The first public commit tells nothing about how this was developed. If I were to publish something that I had worked on alone for a long time, I would definitely squash all early commits into a single one, just to be sure I don't accidentally leak something that I don't want to leak.

> Leak what?

For example, when the commits were made. I would not like to share publicly with the whole world when I have worked on some project of mine. The commits themselves could also contain something you don't want to share, or the commit messages could.

At least I approach stuff differently depending on whether I am sharing it with the whole world, with myself, or with people I trust.

Scrubbing git history when going from private to public should be seen as totally normal.
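One simple way to do that scrubbing is to publish from an orphan branch, so the branch you push has a single commit and no reachable private history. The repo contents below are invented for the example:

```shell
# Sketch: a private repo with messy history, published as one clean commit.
git init -q -b main demo_pub && cd demo_pub
git config user.name demo && git config user.email demo@example.com
echo "API_KEY=abc123" > config && git add config && git commit -q -m "private wip"
git rm -q config && echo "int main(void){return 0;}" > app.c
git add app.c && git commit -q -m "more wip"
# --orphan starts a branch with no parent commits at all:
git checkout -q --orphan public
git add -A && git commit -q -m "Initial public release"
git rev-list --count HEAD   # 1: the public branch has exactly one commit
```

Only `public` gets pushed; the private branch, its timestamps, and its messages stay local.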


Hmm I can see that. Some people are like that. I sometimes swear in my commit messages.

For me it's quite funny to sometimes read my older commit messages. To each their own.

But my opinion on this is the same as with other things that have become tell-tale signs of AI-generated content: if something you used to do starts getting questioned as AI-generated, and you find that label offensive, it's better to change that approach.


Leak what?

If you have, for example, a personal API key or credentials that you are using for testing, you throw them in a config file or hard-code them at some point. Then you remove them. If you don't clean your git history, those secrets are now exposed.
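You can see the exposure directly: even after a removal commit, the secret is still reachable from history. Toy repo, fake key:

```shell
# Sketch: a "removed" secret is still visible to anyone with the history.
git init -q -b main demo_sec && cd demo_sec
git config user.name demo && git config user.email demo@example.com
echo "API_KEY=abc123" > config && git add config && git commit -q -m "add config"
git rm -q config && git commit -q -m "remove secret"
test ! -f config                        # gone from the working tree and HEAD...
git grep API_KEY $(git rev-list --all)  # ...but still found in the old commit
```

This is why squashing or orphan-branching before publishing matters: deleting a file in a later commit does nothing to the earlier commits that contain it.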

Timestamps

A lot of ppl don't use git, and just chuck stuff in there willy-nilly when they want to share it.

People are too keen to say something was produced with an LLM if they feel it's something they could not readily produce themselves.


I would be very concerned about someone working on a 16k loc codebase without a VCS.

Can you really say that unless you switched fields multiple times? Of course you'll pick up on math and physics faster in high school than in college or postgrad, but that's because the problems get way, way harder as you progress. I've found that even in my late 30s I can still easily pick up new skills outside my field of expertise as long as I start with the basics that could also be picked up by a high-schooler. I started learning a new language last year and thanks to modern study apps, I actually find it easier today. Of course it will still take a long time to become an expert, but I'm not sure it would need more total hours than if I had started 20 years ago. It just gets more difficult to allocate the necessary hours for learning.

> even in my late 30s I can still easily pick up new skills outside my field of expertise as long as I start with the basics that could also be picked up by a high-schooler

Same age and same experience. I am learning my third language, after acquiring my second to a fluent level in my early 30s (by living in a country where it's spoken). But it's an entirely different character set and has nearly zero cognates. I'm sure some skills transferred from learning my second language, but I'm massively enjoying it and don't feel bogged down.

I think a lot of it is managing my mental energy, I look at it as a finite per-day resource replenished by sleep. If I have a mentally heavy workday, or overly emotional day, I know my language skills will be sub-par and don't try too hard. I also approach my learning in the morning, when I have an excess of this energy, because my job will do a good job of getting it close to zero, regardless of the starting point.

I think we don't give enough credence to the mental toll of an adult life and corporate job, and how much that takes from us, versus when we were young.


Totally agree. Adult life is just mentally taxing. I'm more curious and more eager to learn now in my 30s than I was in any of my schooling. The learning isn't hard but the energy regulation is.

I think it's so easy for people to discount "mental energy" since culturally we don't often acknowledge it as a finite resource the same way we do physical energy. Well maybe the problem is we view them as separate things in the first place.

When I was younger I just didn't have to worry about so much stuff.


This thread makes a really good point. I am in my late 50s now. I'm really good with computer hardware because I started when I was 11. But I started wanting to become a sci-fi writer at 35, and it has been an uphill battle to get good at, for all the reasons described in this thread.

Agree immensely. As an adult, I worry about taxes, my health insurance, making doctor’s appointments, etc.

Totally of the same persuasion as you, but I'll say I did hear a very good counterpoint when Magnus Carlsen said in an interview that at 30 he feels he can't compute lines as deeply as he could previously, and his edge is now his experience. That was rather convincing.

Most of the folklore around "neuroplasticity" I've found pretty underwhelming. But yeah, if even he says it at that level of consistent practice, that seems like a good yardstick.


As I've experienced getting older I've found it's more about the lack of available time and focus.

I don't have the hours of time a young person does and I don't have the focus, there are a lot of other thoughts, emotions and responsibilities competing for my attention.

Would love someone who's aware of the literature to throw their hat in the ring though.


It's entirely possible, but as I've approached and now passed 30, my improved patience, self-discipline, and self-knowledge have allowed me to pick up skills and wrap my head around things I bounced off of several times as a high-schooler, and the technical foundation I've built up in that time has helped me build more connections and understand things in more depth/from more angles.

I'm sure I'll slow down eventually, but I've always found that thinking fast is vastly overrated. I've always had an easy time understanding things quickly, but it still takes me time to understand them well.


> Can you really say that unless you switched fields multiple times?

I have ;-) far too many times! Even going back and taking undergrad math coursework that my engineering curriculum didn't include, like Discrete Math or Statistics, got a lot harder than calculus / differential equations were when I was younger. I felt like I got less out of each hour, and also couldn't put in as many hours - not just because I have more responsibilities, but also because my brain just gets tired after fewer hours.


>my brain just gets tired after fewer hours.

Have you tried creatine supplements? Especially if you're vegan or don't eat a lot of meat. Most people take it for muscle performance, but I found it insanely helpful for maintaining a sharp mind throughout a long day, especially with little sleep. That's also what the latest research is starting to appreciate. I wouldn't be surprised to see recommended doses above 10g/day in the elderly soonish, since there are basically zero downsides even at much higher doses. In my early 30s I thought I had lost my ability to pull all-nighters because of increased tiredness, but now I feel like I can do even more than when I was 18. Most people greatly underestimate diet in general, because they used to get away with anything when they were young.


Counterpoint: I've been taking creatine (10-15g/day) for over 8 months and I can't say that I notice a difference at all.

That said, while I think that creatine's effects on cognitive performance are often overstated, creatine is scientifically proven to increase brain performance in some cases[1].

And fortunately, creatine is safe and very cheap[2], so maybe worth a try.

1. https://en.wikipedia.org/wiki/Creatine#Cognitive_performance

2. https://www.bulksupplements.com/products/creatine-monohydrat...


Did you have problems with tiredness and concentrating? Especially after only getting very little sleep for an extended time? Or did you already consume a lot of meat? There are many factors here that could potentially make it not worth your while. As with all supplements, a good diet and lots of rest can make them little more than expensive pee. But not everyone can have a perfect diet and lots of sleep all the time. Creatine is a supplement for a very particular lifestyle.

Yep, I bet you're exactly right.

> Did you have problems with tiredness and concentrating?

Yes. Still do.

> Especially after only getting very little sleep for an extended time?

No

> Or did you already consume a lot of meat?

Yes. I mean, I probably eat less meat than the average American, but certainly plenty. Beef, too (high in creatine).

> As with all supplements, a good diet and lots of rest can make them little more than expensive pee.

Yep, this probably describes me. It's still pretty cheap, so I still do it (especially since my kids take creatine as well).


>> Especially after only getting very little sleep for an extended time?

>No

Then I'm not really surprised. I also see no real cognitive effects when everything is well anyways. But this one study is what I found to be the most accurate finding overall beyond enhancing muscle power:

https://pmc.ncbi.nlm.nih.gov/articles/PMC10902318/

It's also super easy to test yourself.


Hell even my young brain got tired on statistics and algebra. Could part of this also be field mismatch?

I think it's just because hours spent learning by children often don't look like work to us. Like, when children are watching children's television or looking through those baby books with shapes and colors, they are studying. To them, that is learning. And I guarantee that in 1 hour of studying, I can learn every single color in Spanish, whereas a baby might need months of daily reading to finally understand it. But because we don't register it as studying, we would still say the baby learned language "effortlessly", while adults need to "study".

I'd argue I learn much faster now. I did study math and physics, but I've found that those tools have accelerated a lot of learning and have a lot of compounding effects. Mileage may vary, but I suspect a strong base allows one to learn even faster.

Though only in the last few years did I realize I was a fast learner. I thought I was slow, because I wouldn't say I understood something unless I had a deep understanding. But I found that my threshold for claiming understanding was just different from other people's.

Where I've found math and physics helpful is in depth and abstraction. The science builds a good framework to dive deep into understanding and tease out the critical components. Science is a search algorithm in some sense. The math helps abstract, or generalize: to see the patterns and extend and use them in ways beyond what was taught. That's where the real compounding happens, and where I personally start to feel I understand things. But it requires the depth. But that's my framework, and I'm sure there are a million good ones and a million better ones.


Math and physics certainly allow you to learn fast in technical fields, and that is my experience as well, but the other comment was more about completely unrelated fields like humanities, where previous experience may not translate at all. For example, an English speaker with a PhD in physics will still have to start more or less from zero when learning Japanese.

Strangely, I've found there are still a lot of transferable skills. But I think it has more to do with the approach than the base knowledge.

Physics definitely helps me learn things like a sport or driving a new vehicle. Forces me to think more about things like where force is being applied, positioning, paths, and many other things. I'll focus on the small things that compound and I think this makes the start a little slower but that makes things accelerate as my base isn't shaky as I'm moving forward.

But this also helps when I've doing things like learn a language. It's why I wrote the last paragraph like I did. The science side gives you the habit of breaking things down into their atomic units. The framework of build, attack/critique/deconstruct, rebuild. When I was younger I couldn't do this. If I were trying to do something like learn Chinese I'd first focus on just memorizing rather than focus on the radicals.

The iterative style of building up is such a useful framework. Naturally when forming a hypothesis you try to build something stable. But the most important thing to then do is attack it as hard as you can. This is a step most people don't take, but it's probably the most important one for finding truth. It damages and even sometimes completely destroys your construction. But when you rebuild it is better, it is stronger. The framework taught me the importance of doing the boring stuff like revisiting what I've already learned.

And that's also why I'm saying there's a million paths and I doubt my background that I'm leveraging is particularly advantageous. The base knowledge is helpful in the first example but nonexistent in the second. The utility still exists though because of the metaphysics/metamathematics. The framework of how to approach problems, to dive into details, to find what are the important parts, to navigate through mental spaces filled with many unknown unknowns. Maybe my neuroplasticity isn't as high as when I was a teen, but I sure didn't have the (mental) tools I have today and boy what a difference it makes.


I've definitely found that I could inhale information faster and memorize much faster as a teenager. I think I was even faster in college.

In my mid-30s, I'm definitely slower to pick up new things than in my college days, but I have much more mental discipline and patience, a broader base of knowledge to draw from, and maybe the biggest differentiator: a much more developed sense of priority and focus in order to get more benefit out of less time.


> I've definitely found that I could inhale information faster and memorize much faster as a teenager.

One possible explanation is passion. What really helps you memorize are emotions. If something triggers emotions or is somehow connected to them, then you have a much higher probability of remembering it. I felt strongly about mathematics as a teenager; any math result I found was a happy event. Or rather, not just any result: I felt nothing about trigonometry formulas and struggled to remember them, so I invented techniques to reconstruct them. Mostly those techniques had nothing to do with math. But the point is: I remember what I like and don't remember boring things. It is emotions at work.

> a much more developed sense of priority and focus in order to get more benefit out of less time.

Which is evidence confirming my hypothesis: you are not as interested in the knowledge you acquire as you are in results. You are juggling priorities and subjecting your learning process to some higher goals. No more learning driven by emotions; now rationality is king, with no place for emotions.

It doesn't mean that my hypothesis is true; I just mention it for completeness. We don't really know the reason behind learning difficulties growing with age. AFAIK even neuroplasticity itself can be at least partially driven by emotions.


> I've found that even in my late 30s I can still easily pick up new skills outside my field of expertise as long as I start with the basics that could also be picked up by a high-schooler.

This was rather famously the technique of Jonas Salk for learning and mastering things: switch fields every so often, giving yourself a wide base of disciplines to apply to new fields.


Try something completely different from your field of expertise. For a typical nerd, this might be motor skills like in gymnastics. My experience is this takes a very long time to learn.

It's 60-70% time and energy from lifestyle and financial security (subconscious anxiety).

Someone like Peter Steinger can sit at home experimenting for hours and learn and create vastly more at the age of 48 than the average 30-year-old.

But if you compare outright performance, the brain, like any other part of the body, is at its biological peak function in the late teens. Rachmaninov wrote much of his work as a teenager. Mozart first composed at age 8. Zuckerberg created FB in undergrad. The youthful organism is full of vitality and ease.


Out of curiosity, what are the modern study apps that you used?
