I guess that depends on how you define productivity. It seems that you have a preconceived, narrow definition. If I pay millions of dollars in taxes that fund a war, who is to say that was more productive than planting a tree and taking a nap?
If you need to bypass censorship, you'll need a tool specifically designed for anti-censorship, rather than one repurposed for the job.
Since China has the most advanced network censorship, the Chinese have also invented the most advanced anti-censorship tools.
The first generation is shadowsocks. It encrypts the traffic from the very first byte, with no handshake, so DPI cannot identify its nature. This is very simple and fast and should suffice in most places.
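To make that concrete, here's a minimal, hypothetical sketch in Python (using the `cryptography` package). It is not the real shadowsocks wire format, and the key, nonce, and address handling are all simplified; it only illustrates what "encrypted from the first byte, no handshake" means for what a censor can observe:

```python
# Hypothetical sketch only -- NOT wire-compatible with real shadowsocks.
# The point: the very first bytes a censor sees are already ciphertext,
# so there is no plaintext negotiation for DPI to fingerprint.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

PSK = ChaCha20Poly1305.generate_key()  # pre-shared between client and proxy server

def first_bytes_on_wire(target: str, port: int, payload: bytes) -> bytes:
    nonce = os.urandom(12)                         # random, so every flow looks different
    header = f"{target}:{port}".encode() + b"\x00"
    aead = ChaCha20Poly1305(PSK)
    # Even "where to connect" is inside the ciphertext; only random-looking
    # bytes (nonce + AEAD output) ever appear on the wire.
    return nonce + aead.encrypt(nonce, header + payload, None)

blob = first_bytes_on_wire("example.com", 443, b"GET / HTTP/1.1\r\n\r\n")
print(len(blob), "bytes that look like random noise to a passive observer")
```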
The second generation is the Trojan protocol. The lack of a handshake in shadowsocks is itself a distinguishing feature that may alert the censor, who can decide to block shadowsocks traffic on suspicion alone. Trojan instead tries to blend into the vast amount of HTTPS traffic on the Internet by pretending to be a normal web server protected by HTTPS.
After Trojan, a plethora of protocols based on TLS camouflage have been invented:
1. Add padding to avoid the TLS-in-TLS traffic characteristics in the original Trojan protocol. Protocols: XTLS-VLESS-VISION.
2. Use QUIC instead of TCP+TLS for better performance (very visible if your latency to your tunnel server is high). Protocols: Hysteria2 and TUIC.
3. Multiplex multiple proxy sessions in one TCP connection. Protocols: h2mux, smux, yamux.
4. Steal other websites' certificates. Protocols: ShadowTLS, ShadowQUIC, XTLS-REALITY.
Oh, and there are tools that mask UDP traffic as ICMP or TCP traffic to bypass ISP QoS throttling if you are proxying traffic through QUIC. Example: phantun.
Oddly, I thought this discussion would be about actual toddlers.
There is a way to win an argument with a toddler. You find out what's bothering them, usually something emotional, and you validate it. "Yes! It's fun to stay up late! Yes! You don't want to eat your vegetables!" Once they feel heard, you've got a shot at getting them to do what you want.
That's a good way to win an argument with a non-toddler as well. Acknowledge that what they want is legitimate (if it is). Concede points of agreement. Talk about shared goals. Only then talk about a different path to the solution.
Language seems to be confused with logic or common sense.
We've observed it before in psychiatry (and modern journalism, but I digress), but LLMs have made it obvious that grammatically correct, naturally flowing language requires a "world" model of the language and close to nothing of reality. Spatial understanding? Social cues? Common-sense logic? Mathematical logic? All optional.
I'd suggest we call the LLM language fundament a "Word Model" (not a typo).
Trying to distil a world model out of the word model. A suitable starting point for a modern remake of Plato's cave.
That's a neat blog all around. Lots of interesting stuff to poke around in.
I think it's okay to abandon things, and you can certainly learn things and reuse parts from abandoned projects. For me, a breakthrough moment was when I decided to make things so small that I could finish them. It helped me develop the skill of finishing things, which is a separate skill that's hard to learn, because it only happens at the end of a process so long and hard you almost never make it there. All my friends who are making video games start by writing their own engine, and get burnt out somewhere around the point where they're making a level editor. They learn a lot about things like tooling (which, coincidentally, is a lot like what they already knew how to do), but never actually make the game. It'd be like learning stone masonry by building a cathedral—you won't live to see the end. Start so small that you can't fail, then work your way up to bigger and bigger projects.
A timeless piece of advice comes toward the end, where she describes all the smart young professionals out there who are looking for positive leadership. That means respecting those above you and keeping them informed, and looking after your crew.
The zeitgeist of the time was shifting emphasis onto management (MBA-type stuff), but the army had a saying: you can't manage a soldier into war, you lead them.
“...it's like this. Sometimes, when you've a very long street ahead of you, you think how terribly long it is and feel sure you'll never get it swept. And then you start to hurry. You work faster and faster and every time you look up there seems to be just as much left to sweep as before, and you try even harder, and you panic, and in the end you're out of breath and have to stop--and still the street stretches away in front of you. That's not the way to do it.
You must never think of the whole street at once, understand? You must only concentrate on the next step, the next breath, the next stroke of the broom, and the next, and the next. Nothing else.
That way you enjoy your work, which is important, because then you make a good job of it. And that's how it ought to be.
And all at once, before you know it, you find you've swept the whole street clean, bit by bit. What's more, you aren't out of breath. That's important, too...”
― Michael Ende, Momo
Reinertsen's books are very, very good. The most recent and most up-to-date is Principles of Product Development Flow, which is where that chart came from. The previous one, Managing the Design Factory, is pretty similar, and I think it's a better read.
Another great takeaway from it is to prioritize things by cost of delay, or even better, what he calls Weighted Shortest Job First, where you divide the cost of delay by the expected length of the task. The only problem is that the same people that want to get oddly formal and inappropriately rigorous with things like story points will want to turn cost-of-delay into an accounting exercise - or object that it's impossible because they don't have the accounting system for it - which misses the point entirely.
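It really doesn't need to be an accounting exercise; a back-of-the-envelope sketch is all the idea takes. Here's a tiny illustration in Python with invented numbers (the feature names and dollar figures are purely hypothetical):

```python
# Weighted Shortest Job First: divide cost of delay by expected duration
# and work in descending order of that ratio. All numbers are made up.
backlog = [
    # (feature, cost of delay in $/week, expected duration in weeks)
    ("checkout rewrite",   50_000, 10),
    ("pricing experiment", 30_000,  2),
    ("onboarding fix",     10_000,  1),
]

for name, cod, weeks in sorted(backlog, key=lambda it: it[1] / it[2], reverse=True):
    print(f"{name:20s} WSJF = {cod / weeks:8,.0f} $/week of work")

# The big, "important" item lands last: its total cost of delay is highest,
# but per week of effort it pays off less than the two quick wins.
```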
I hadn't thought about that in movie context before, but it totally makes sense.
I've worked with other developers who want to build high-fidelity wireframes, sometimes in the actual UI framework, probably because they can (and it's "easy"). I always push back against that in favor of a whiteboard or Sharpies. The low fidelity brings better feedback and discussion, focused on layout and flow rather than spacing and colors. Psychologically it also feels temporary, giving permission for others to suggest a completely different approach without feeling like they're tossing out more than a few minutes of work.
I think in the artistic context it extends further, too: if you show something too detailed, it can anchor in people's minds and stifle their creativity. Most people have experienced this in an ironically similar way: consider how differently you picture the characters of a book depending on whether you watched the movie first.
I don't know if they teach the Theory of Bounded Rationality anymore, but it helped me when I was younger and got thrown into similarly complex, no-win situations.
The tendency is to think ALL complex problems can be solved if I just have the right info, the right skill, the right people, the right resources, enough time, etc. But for some problems the stars will not align. In those cases, what do you do?
You have two options:
1. Pick a simpler problem where you do have the info, skills, resources, people, and time to ensure the outcome is going to be positive.
2. Pick the complex problem, but accept that you are not going to solve it completely.
The best time to plant a tree is twenty years ago. The second best time is today.
I don't know why anyone would sit around moping, "If only this were thirty years ago!" If your idea seems like it would have been "easy" back then, most likely that's because we are where we are, and you know what you know now that you wouldn't have known then.
It's like all the people who say things like "Youth is wasted on the young" and "I wish I had known what I know now back when I was seventeen." Yeah, you know it at all because you aren't seventeen.
Great teachers (if you can find them): Andrej Karpathy, Andrew Gelman & Ben Goodrich (Columbia), and Subbarao Kambhampati (ASU), to name a few I know of.
Go where the hard problems are (or find someone who is working on them): If you don't have a good intuition for where to find good problems to practice on for pay, choose a place to work where a data scientist is not just building dashboards/analytics but where the company/team relies on them to answer questions like "What goals should we set for the next half based on what you see?"
An ML practitioner (read: uses ML tech to do/debug X) is different from an ML engineer (read: implements ML algorithm X end-to-end on data), which is different from an applied statistician (think marketing science or powering experiments like A/B tests). All three work in different areas of ML in one form or another, but be clear in your head about which one you want to do and what you expect from it.
A lot of ML/stats doesn't involve big data and can still be really intuitive: I would say look for a problem domain in the social/life/pharma/eco/political/survey/edtech sciences. They are full of intuitive models that need to be explainable and are often debuggable. An example here is the use of Stan for multilevel/hierarchical regression problems. Training here also makes you a great data scientist.
On top of that, the vast majority of engineers and researchers who joined the field only did so in the last few years.
Meanwhile, as with many other fields, it takes decades to become a well-rounded expert. One paper a day, one or two projects a year. It just takes time, no matter how brilliant or talented you are.
And then the research moves on. And more is different: a shift from GFLOPS to TFLOPS and then PFLOPS over a single decade is seismic.
You'll have an absolute blast reading Red Notice by Browder. It's about a hedge fund guy who ends up in Russia during the privatization period, quickly realizes the country is getting looted, and wants a big slice for himself. It's a true-ish story, written like a spy novel, with many fascinating details about this unique period in history.
My experience has been that the people opposed to types won't be convinced to start liking them by anything you can tell them or have them read. In all of the cases where I've seen Sorbet be adopted, the process looked like this:
1. Ambitious team who wants types does work to get the initial version passing in CI. Importantly, it's only checking at `# typed: false`, which basically only checks for missing constants and syntax errors.
2. That initial version sits silently in the codebase over a period of days or weeks. If new errors are introduced, it pings the enthusiastic Sorbet adoption team; they figure out whether it caught a real bug or whether the tooling could be improved. It does not ping the unsuspecting user yet.
3. Repeat until the pings are only high-signal pings.
4. Turn Sorbet on in enforcing mode in CI. It's still only checking at `# typed: false` everywhere, but now individual teams can start to put `# typed: true` or higher in the files they care about.
5. Double check that at this point it's easy to configure whatever editor(s) your team uses to have Sorbet in the editor. Sorbet exposes an LSP server behind the `--lsp` flag, and publishes a VS Code extension for people who want a one-click solution.
6. Now the important part: show them how good Sorbet is, don't tell them. Fire up Sorbet on your codebase, delete something, and watch as the error list populates instantly. Jump to definition on a constant. Try autocompleting something.
In my experience trying to bring static types to Ruby users, seeing is really believing, and I've seen the same story play out in just about every case.
One final note: be supportive. Advertise one place for people to ask questions and get quick responses. Admit that you will likely be overworked for a bit until it takes off. But in the long run, as the benefits spread, other teammates will start to help out with the evangelism.
Obviously using a dedicated instruction is fastest in normal cases.
But if you need to implement popcount or many other bit manipulation algorithms in software, a good book to look at is "Hacker's Delight" by Henry S. Warren, Jr, 2003.
"Hacker's Delight' page 65+ discuss "Counting 1-bits" (population counts). There are a lot of software algorithms to do this.
One approach is to set each 2-bit field to the sum of its two 1-bit fields, then each 4-bit field to the sum of its two 2-bit fields, and so on, like this:
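(A sketch of that divide-and-conquer method, translated into Python for illustration; the book presents it in C, and the masks below assume a 32-bit word.)

```python
def popcount32(x: int) -> int:
    """Count set bits by summing ever-wider fields (divide and conquer)."""
    x &= 0xFFFFFFFF
    x = (x & 0x55555555) + ((x >> 1) & 0x55555555)   # sum adjacent 1-bit fields
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # sum adjacent 2-bit fields
    x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F)   # sum adjacent 4-bit fields
    x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF)   # sum adjacent 8-bit fields
    x = (x & 0x0000FFFF) + (x >> 16)                 # sum the two 16-bit halves
    return x

assert popcount32(0b1011_0001) == 4
```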
> Nothing wrong with inhibiting growth in return for long term stability
For long-term plans to pay off, they must survive a series of short terms. Criminal gangs and dictators don’t ignore the long term because they’re stupid; they ignore it because they must. A drug gang practicing classical tradecraft would be decimated by one coördinating electronically. The latter will be caught faster, but a series of short-term-motivated actors is the equilibrium state of illicit and physical trading systems.
This is exactly the problem with all cryptocurrency currently. It’s a massive user experience issue, in the sense that users have to experience the technical bullshit of how the currencies work, completely missing the brilliant part of real money: it just works. I hand people money, they give me things. I swipe my credit card, I get things.
I can’t remember who said it originally, but there’s a great test you can give to any statement or idea: just immediately ask “who cares?” The best product demos I’ve ever seen answer “who cares?” in each part of the pitch. The worst ones just rattle off mumbo jumbo forever.
This has been my experience too, but with a slight difference - the new feature is very often an ask from sales, who are trying to close either a big new sale, or a big renewal. The prospect/customer demands some obscure feature that like 1% of our users will use (at most), but they’re a big prospect/customer. Product/UX/dev push back, say we should be working on features, performance and reliability that will benefit most customers, but it’s hard to come up with the precise revenue impact of this. Sales are better at convincing execs, and “this will help us close a $1 mil/year deal” is very tangible, so the new feature gets built.
This is kind of a symptom of “the buyer isn’t the user”, though. Often we build these features, then monitor usage, and the customer that demanded it doesn’t even use it! Or we build it, and it doesn’t matter, we still don’t close the sale.
I think the core issue is “building for users” and “building to close specific sales” are often strongly at odds with one another, and most of the time “building to close specific sales” wins.
Too much multitasking and producing software for years has made me feel this weird thing:
Which is that all accomplishments are meaningless in the end. And we are just getting older. Once a person dies they aren’t experiencing any stuff that came as a result of their effort. They may as well have just messed around and had a family sooner, or traveled, or not. It’s all meaningless anyway.
I can’t shake it. Has anyone else felt this way all the time? Any advice?
If it helps, just about everything is like that if you look closely. Processors have side-channel attacks, RAM has rowhammer (which recently turned out to be a real thing), digital electronics in general turn out to have analog side effects, and time and space are both basically impossible for computers to represent precisely (see: falsehoods programmers believe about *). We should do what we can, but life goes on :)
Try adding some random stuff that you get for basically free with your choice of backend / implementation.
This will waste their time if it's a harder problem with their backend / implementation.
Another fun thing you can do is use something that looks like a third-party service/API but is really just another domain controlled by your company. Make it something specific to your business, so they'll be tempted to use it.
If they use it, the least you can do is terminate their access at an inconvenient time. This will frustrate their customers as they scramble to do their own implementation.
The final thing you can do is skip incremental updates and instead do big rollouts with many features at once.
That way they'll always be weeks to months behind you.
Basically, when someone is following you, mine the path to the point that it's cheaper and more effective for them to find their own.
Edit: Bonus round: talk up some near-useless feature on your dev blog that would be hilariously expensive and complicated to build, without actually building it, and hope they waste time on it.