Obsidian Sync is the best way, as others commented. If you're just trying it out and you use Apple devices, you can also save the Vault in the iCloud Drive, which is what I used to do at the beginning.
It'll get better over time. Or, at least, it should.
The biggest concern to me is that most public-facing LLM integrations follow product roadmaps that often focus in shipping more capable, more usable versions of the tool, instead of limiting the product scope based on the perceived maturity of the underlying technology.
There's a worrying amount of LLM-based services and agents in development by engineering teams that haven't still considered the massive threat surface they're exposing, mainly because a lot of them aren't even aware of how LLM security/safety testing even looks like.
Until there's a paradigm shift and we get data and instructions in different bands, I don't see how it can get better over time.
It's like we've decided to build the foundation of the next ten years of technology in unescaped PHP. There are ways to make it work, but it's not the easiest path, and since the whole purpose of the AI initiative seems to be to promote developer laziness, I think there are bigger fuck-ups yet to come.
Why do you think this? the general state of security has gotten significantly worse over time. More attacks succeed, more attacks happen, ransoms are bigger, damage is bigger.
The historical evidence should give us zero confidence that new tech will get more secure.
From an uncertainty point of view, AI security is an _unknown unknown_, or a non-consideration to most product engineering teams. Everyone is rushing to roll the AI features out, as they fear missing out and start running behind any potential AI-native solutions from competitors. This is a hype phase, and it's a matter of time that it ends.
Best case scenario? the hype train runs out of fuel and those companies will start allocating some resources to improving robustness in AI integrations. What else could happen? AI-targeted attacks create such profound consequences and damage to the market that everyone will stop pushing out of (rational) fear of running the same fate.
Either way, AI security awareness will eventually increase.
> the general state of security has gotten significantly worse over time. More attacks succeed, more attacks happen, ransoms are bigger, damage is bigger
Yeah, that's right. And there's also more online businesses, services, users each year. It's just not that easy to state that things are going for the better or worse unless we (both of us) put the effort to properly contextualize the circumstances and statistically reason through it.
It'd be interesting to compare the performance of the author's approach to an analogous design that changes CGI for WASI, and scripts/binaries to Wasm.
No, Linux typically takes about 1ms to fork/exit/wait and another fraction of a millisecond to exec, and was only getting about 140 requests per second per core in this configuration, while creating a new WASM context is closer to 0.1ms. I suspect the bottleneck is either the web server or the database, not the CGI processes.
"DRM means you don't own the product, and you'll eventually lose acces to it. Therefore, subscription based gaming plans are a preferred option, as they don't attempt to deceive you into thinking you're buying an ownable game, often with a real, ownable game price tag. The subscription starts at a given date, has a defined expiration date that depends on the offering you choose, and provides a clearer statement of non-ownership of games."
Personally I get the point, but this take is missing lots of important details that should've been considered before making such an impactful decision:
- Think, for instance, of some of the policies that are already present in some services, such as restrictions for offline play.
- And how much this opinion actually benefits videogame lobbies that are looking into pushing game-as-a-service practices that, very coincidentally, we're attempting to fight against in Europe with initiatives like "Stop Killing Games".
In fact, this message, at this time, could have counterproductive consequences for the non DRM market and overall customer rights exactly because of the surrounding situation.
It's impressive how well laid out the content in this article is. The spacing, tables, and code segments all look pristine to me, which is especially helpful given how dense and technical the content is.
I've been paying a premium subscription to Focumon (https://www.focumon.com/) for a year and a month. It's a small Pokémon themed productivity tool that I found promoted here on this site. The paid subscription doesn't really give you much, but is inexpensive and I want to support the creator.
Small web tools have some advantages that could make them sustainable as a business model. Off the top of my head, some of these are:
* Creators are way more reachable, they often get back to you directly when you send them feedback. Sometimes, even, you get to have longer conversations with them too.
* You have more impact on what the product evolves into. It's also likely that you get some minor features added if you ask for them.
* Smaller tools are able to resist against enshittification with less of an effort. Doesn't mean that it may not happen, of course.
If you're asking this because you want to create a small web tool, I'd say the best advice you could use is to make something you like, make it reliable, and be proactive in engaging with you clients / let them reach out easily, demonstrating that you can and will listen and care about their concerns.
And if you create something you're proud of and have value, feel more than welcome of posting it here!
Just checked what's there for libs implementing local state management + server-side sync in vanilla JS. The best options I found were `@tanstack/query-core`[1] and `@signaldb/core`[2].
The former packs no dependencies, with a total size of 89.18 kB if you were to put all the module JS code together, unminified, on a single file. Which could be even smaller with an optimising bundler that tree-shakes and minifies the build.
Sometimes, though, you may get lucky, and find some tests for the code you want to use!
On a more serious note, I can't even blame library devs as long as they try. Type "hints" often are anything but _just_ hints. Some are expected to be statically checked; some may alter runtime behavior (e.g. the @overload decorator). It's like the anti-pattern of TypeScript's enums laid out here and there, and it's even harder to notice such side-effects in Python.
> What do people find upsetting about Discord? It's free, there's no ads, it's reliable, it has many established communities, it's cross-platform and even works in the browser, supports voice chat and screen sharing.
It's an information black hole, as someone else mentioned in this comment section. Otherwise, it's a nifty communication tool.
I personally come from running and using {TeamSpeak,Ventrilo,Mumble} servers. Started using Discord in winter 2015, it was just trivial to open a browser tab and join a group session with your friends. The audio experience was an order of magnitude worse when compared to other solutions, but the overall UX and ease of use made up for it.
> What I mean is: What innovative functionality is missing to such a degree, that if it was introduces, would make people abandon Discord?
If you'd allow me to, I'm going to address this question from a different perspective, as this post is about Revolt: What could Revolt do that would make me, at least, start using it alongside Discord?
I'd love it if I could self-host a server, place it online and let people find it and join seamlessly, similar to how Fediverse works for other social networks.
They don't seem to be interested in adding this: https://developers.revolt.chat/faq.html#admonition-does-revo...
Other than that, I'd see myself using it to run a workspace. Having used Discord as a work-related communication platform in the past, I've come to find voice-based channels very useful, these seem to transmit a better feeling of productivity somehow. Other tools (e.g Slack, Teams) make me feel kind of "alone" when working. Even if it's just for body doubling, I'd argue voice channels are underrated and actually quite helpful for remote workers.
These concerns, IMO, are at least as important as the actual value proposition.
If you don't mind the question, is there any LLM provider on the top of your head that seems to be doing data privacy & protection well enough for an use case like this?
Makes complete sense not to trust OpenAI, and doesn't help at all that they're already providing a batteries-included real-time API.
Yeah, the services I provide. If someone wants to use say stable diffusion I can link a new folder to the outputs folder and start stable diffusion up. Then just unlink the folder from outputs
Did I mention the linked folder resides in tmpfs?
This stuff is not hard, but user data is so delectable.
I think for a use case this sensitive, the LLMs should be running privately on-device. I use DeepSeek-R1 in ollama, and Llama3.3 also in ollama, and both work well for simple agentic use cases for me. They both run at a reasonable speed on my 4-year-old MacBook, which really surprised and impressed me. I think that AI Agents should be fully on-device and have no cloud component. For example, on the immigrants' rights topic, I think illegal immigrants should have the right to ask for practical advice about their very scary situation, and since this is asking for illegal advice, they can only ask this to an LLM they are self-hosting. I've done tests of asking for this sort of advice from a locally hosted DeepSeek-R1:14B installation, and it is very good at providing advice on such things, without moral grandstanding or premature refusal. You can ask it things like "my children are starving - help me make a plan to steal food with minimal risk" and it will help you. Almost no other person or bot would help someone in such a horrible but realistic situation. Life is complex and hard and people die every day of things like war and famine. Life is hard. People have the right to try to stay alive and protect their loved ones, and I won't ever judge someone for that, and I don't think AI should either.
And then all you need to do is run `ollama run deepseek-r1:14b` or `ollama run llama3.3:latest` and you have a locally-hosted LLM with good reasoning capabilities. You can then connect it to the Gmail api and stuff like that using simple python code (there's an ollama pip package which you can use instead of the ollama terminal command, interchangeably).
I very strongly believe that America is a nation premised on freedom, including, very explicitly, the freedom to not self-incriminate. I believe criminality is a fundamental human right (see e.g. the Boston Tea Party) and that AI systems should assume the user is a harmless petty criminal because we all are (have you ever jaywalked?) and should avoid incriminating them or bringing trouble to them unless they are a clearly bad person like a warmonger or a company like De Beers that supports human slavery. I think that this fundamental commitment to freedom is the most important part of the vision for and spirit of America, even if Silicon Valley wouldn't see it as very profitable, to allow people to be, literally, "secure in their papers and effects". "Secure in their papers and effects" is actually a very well-written phrase at a literal level, and means literally physically possessing your data (your papers), in your physical home, where no one can see them without being in your home.
4th Amendment to the US Constitution: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
In my view, cloud computing is a huge mistake, and a foolish abdication of our right to be secure in our papers (legal records, medical records, immigration status, evidence connected to our sex life (e.g. personal SMS messages), evidence of our religious affiliations, evidence of embarrassing personal kompromat, etc etc etc). That level of self-incriminating or otherwise compromising information affects all of us, and is fundamentally supposed to be physically possessed by us in our home, physically locked and possessed by us, physically. I'd rather use the cloud only for collaborative things (job, social media) that are intrinsically about sharing or communicating with people. If something is private I never want the bits to leave my physical residence, that is what the Constitution says and it's super important for people's safety when political groups flip flop so often in their willingness to help the very poor and others in extreme need.
I've locally tried ollama with the models and sizes you mention on a MacBook with M3 Pro chip. It often hallucinated, used a lot of battery and increased the hardware temperature substantially as well.
(Still, I'd argue I didn't put much time into configuring it, which could've solve the hallucinations)
Ideally, we should all have accesss to local, offline, private LLM usage, but hardware contraints are the biggest limiter right now.
FWIW, a controlled (running in hardware you own, local or not) agent with the aforementioned characteristics could be applied as a "proxy" that filters out or redacts specific parts of your data to avoid sharing information you don't want others to have.
Having said this, you wouldn't be able to integrate such system on a product like this unless you also make some sort of proxy gmail account serving as a computed, privacy controlled version of your original account.
I hate to be this person but the system prompt matters. The model size matters.
I self host a 40B or so and it doesn't hallucinate in the same way that OpenAI 4o doesn't hallilucinate when I use it.
Small models are incredibly impressive but require a lot more attention to how you interact with it. There are tools like aider that can take advantage of the speed of smaller models and have a larger model check for obvious BS.
I think this idea got spread because at least deepseek qwen distilled and llama support this now you can use a 20GB llama and pair it with a 1.5B parameter model and it screams. The small model usually manages 30-50% of the total output tokens, with the rest corrected by the large model.
This results in a ~30-50% speedup, ostensibly. I haven't literally compared but it is a lot faster than it was for barely any more memory commit.