I love LLM tech and use it every day for coding, but I don't like calling them AI. We can definitely argue that LLMs are not just rearranging code, but let's look at some evidence that suggests otherwise. Last year's NYT lawsuit showed that LLMs have memorized large amounts of news text; you've probably seen those examples. The recent (not yet peer-reviewed) paper "Language Models are Injective and Hence Invertible" suggests LLMs memorize their training data. And this recent DEF CON 33 talk https://youtu.be/O7BI4jfEFwA?si=rjAi5KStXfURl65q shows many ways to extract training data from them. Given all this, it's hard to believe they are intelligently generating code.
So this is the reality we're living in now, where bot farms have become normalized? I always associated bot farms with authoritarian regimes like Russia and China, but are we becoming the same thing? And VC funds are actually backing this? I hope I'm not the only one who finds this completely insane. I can't even listen to the a16z podcast anymore; my mind now permanently associates them with bot farms. News like this makes me wonder whether people ever think about moral values and ethics.
Guerrilla marketing and bot farms have been around for a very long time. There are bot farms to amplify all sorts of messages. A friend who is a 'performance artist' paid almost nothing to have his LinkedIn account amplified by a large number of nonsense accounts on that platform. 'Creators' frequently buy followers so 'the algorithm' will surface their 'content'. My first roommate 20 years ago almost got a job riding the train and having 'organic' conversations about how great some product is. My solution is to buy nothing and poke and laugh at everything. (Edited for spelling)
I take it the Greptile folks know that LOC is in no way a metric that correlates with productivity in the LLM era. But putting that aside, just knowing how much code is going through their system seems interesting enough to make the report worth reading. Thanks for the dot matrix report.
Most crawlers have no concept of what that is. They will follow links to this site and then follow links out of this site even after being told not to [1]. The majority of crawlers follow zero rules, RFCs, etc. The few platforms that do follow standards and rules are akin to a law-abiding citizen in Mos Eisley.
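For contrast, here is roughly what a rule-following crawler is supposed to do before touching any page. This is just the Python stdlib; the site and user-agent string are made up:

```python
from urllib.robotparser import RobotFileParser

# A polite crawler parses robots.txt and honors it before fetching.
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical site
rp.read()  # fetches and parses robots.txt

if rp.can_fetch("ExampleBot/1.0", "https://example.com/some/page"):
    print("fetch allowed")
else:
    print("disallowed by robots.txt")

# Python 3.6+ also exposes any Crawl-delay directive.
print("crawl delay:", rp.crawl_delay("ExampleBot/1.0"))
```

The crawlers being complained about here skip all of this entirely.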
Yes. It's hard to explain the experience of hosting a website since 2023.
A crazy amount of really dumb bots loading every URL on the website in a full headless browser, with a default Chrome user-agent string. All different IP addresses, various countries and ASNs.
These crawlers are completely automated and simply crawl _everything_ and don't care at all if there's value in what they're crawling or if there's duplicate content, etc.
There's no attempt at efficiency, just blind crawling of the entire internet 24/7. Every page load (1 per second or more?) comes from a different IP address.
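You can see the pattern in the access logs: huge numbers of distinct IPs, a handful of hits each, nearly all presenting a stock Chrome UA. A rough sketch of how to eyeball it, assuming nginx/Apache combined log format; the log path and the UA heuristic are just illustrative:

```python
from collections import Counter

LOG = "/var/log/nginx/access.log"  # hypothetical path; adjust to your setup

ips = Counter()
generic_chrome = 0
total = 0

with open(LOG) as f:
    for line in f:
        # Combined log format: IP/date, "request", status/size,
        # "referer", "user-agent" -- quoted fields land at odd indexes.
        parts = line.split('"')
        if len(parts) < 6:
            continue
        ip = parts[0].split()[0]
        ua = parts[5]
        total += 1
        ips[ip] += 1
        # Headless crawlers often present a bare, stock Chrome UA
        # with no bot token at all.
        if "Chrome/" in ua and "bot" not in ua.lower():
            generic_chrome += 1

print(f"{total} requests from {len(ips)} distinct IPs")
print(f"{generic_chrome} requests with a generic Chrome user-agent")
print("busiest IPs:", ips.most_common(5))
```

If most of your traffic is one request per IP with an identical generic UA, you're looking at a distributed crawler, not humans.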
Given that major memory manufacturers are abandoning consumer RAM production to focus on HBM for AI data centers, can we predict that HBM prices will eventually fall enough to make it viable for consumer hardware?
Great webapp. There is a similar app that I love to scroll through from time to time. It's free and needs no internet connection.
https://apps.apple.com/us/app/universe-in-a-nutshell/id15263...
> The range of size in the universe, from the tiniest particles to the epic galaxies - we take you on a journey of size that lets you explore it all with a single swipe.
I have not shared it with many people, but one of my most wanted features was to completely share my photos with my partner. None of the services I tried (Plex, Synology Photos) had it. In Immich, it's just the flip of a switch.
Flip a switch and then what? Do you get an isolated public URL to share? Or is your infrastructure exposed to the internet, with the shared URL pointing to your actual server where the data is hosted?
I will explain my use case. We use iPhones in our family. We have a 2TB iCloud plan, but we have around 8TB of media from our phones. So I started using boredazfcuk/docker-icloudpd to download iCloud photos daily (a rough sketch of that pull is below) and keep only the past 2 years of media on our phones for iCloud features.
I wanted a separate app to view these TBs of media on my phone, tried many services, and settled on Immich. Whenever my wife and I wanted to share photos, we had to create albums and send links, or share media through messages. That is a very painful way to view each other's media. I wanted a service that would just share all our photos with each other so they would appear in the same timeline. There was none available, and I had seen people complain about that on HN.
Immich has that feature: you can select whom to share your whole library with. With that enabled, I see all my photos and my wife's photos in the same timeline, and the same goes for her. Immich lets everyone log in with their own credentials, and it's hosted on Coolify behind a Cloudflare Tunnel.
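For anyone curious about the daily download part: the boredazfcuk container is a wrapper around the icloudpd CLI, so a minimal sketch of the pull looks something like this (the library path and Apple ID are placeholders; the real container also handles the 2FA/session cookies and scheduling for you):

```python
import subprocess
from pathlib import Path

# Minimal sketch, assuming the icloudpd CLI is installed; run daily
# via cron/systemd. Paths and account are made up.
LIBRARY = Path("/mnt/photos/icloud")  # hypothetical library path

def pull_icloud_photos() -> None:
    """Download any new iCloud media into the local library."""
    subprocess.run(
        [
            "icloudpd",
            "--directory", str(LIBRARY),
            "--username", "me@example.com",  # placeholder Apple ID
        ],
        check=True,  # fail loudly so a cron alert fires
    )

if __name__ == "__main__":
    pull_icloud_photos()
```

Immich then just indexes that library folder alongside the photos uploaded from the apps.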
> you have your infrastructure exposed to the internet and the shared URL is pointing to your actual server where the data is hosted
I think the previous commenter misunderstood your question; this is the answer (you can also put it behind something like cloudflared tunnels).
Immich is a service like any other running on your server; if you want it exposed to the internet, you have to do that yourself (get a domain, expose the service via your home IP or a tunnel like cloudflared, and point the domain at it).
After that, Immich allows you to share public albums (anyone with the link can see the album, no auth) or private ones (people have to authenticate with your Immich server; you either create an account for them, since you're the admin, or set up OAuth with automatic account creation).
Ugreen has it. It has conditional albums in which you can set up rules like person, file type, location, anniversary, and more, and share a live album. Or leave all the parameters empty and simply mirror the entire library.