I spent the last month or so migrating over a hundred models to v2 and it has been a pretty pleasant experience. Free performance gains and much clearer, more readable models. On top of this, interacting with complex nested root models is now much more organized, and `.model_validate(data).model_dump()` always works, whereas before I had to do a lot of strange JSON loading and dumping around instantiation for certain models.
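For the curious, here's a minimal sketch of that round trip, assuming Pydantic v2 (the `Address`/`User` models are just illustrative):

```python
from pydantic import BaseModel

class Address(BaseModel):
    city: str
    zip_code: str

class User(BaseModel):
    name: str
    addresses: list[Address]

# Nested dicts validate directly into nested models, and model_dump()
# round-trips back to plain Python data -- no json loads/dumps dance.
data = {"name": "Ada", "addresses": [{"city": "London", "zip_code": "N1"}]}
user = User.model_validate(data)
assert user.model_dump() == data
```

In v1 the equivalents were `parse_obj` and `.dict()`, which didn't always round-trip cleanly for nested/custom root models.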
The changes are overall good and the library has matured into something that seems like it will be stable for a long time to come.
Especially considering that there are discussions in these repos' issues where the code owners, who "don't condone illegal activity," actively provide guidance on how to use the stolen data to log in to victims' accounts on various services.
I signed up for this today and am quite enjoying it. There are a couple of nitpicks, but overall it seems like a very simple and elegant solution to this particular problem.
'I have suspected for years that the STEM fields posed the most dangerous threat possible to the unopposed dominance of politically correct sociological idiocy over the entirety of the university environment, basing their claim to validity on recognition of something approximating a universally accessible objective reality.
...
But, make no mistake about it, scientists, technologists, engineers and mathematicians: your famous immunity to political concerns will not protect you against what is coming fast over the next five or so years: wake up, pay attention, or perish, along with your legacy. Whatever you might offer the broader culture in terms of general value will be swept aside with little caution by those who regard the very axioms of your field as intolerable truly because of the difficulty in comprehending them and considered publicly as unacceptably exclusionary, unitary and unconcerned with sociological “realities.”'
I would use it in a heartbeat if there were an option for a one-time license/activation fee and the ability to use it offline, without associating the graphs with an account and communicating them back to a central server. My guess is that there may be more folks like me who work at companies that require a certain level of anonymity or security regarding sensitive information like database schemas. Just a thought!
I'd echo this - I know a decent number of DBAs/data devs who need to generate an ERD once a year for some giant overview thing for their new VP or whatever, and Visio just doesn't cut it, Graphviz has a million ways to slice it, etc.
Maybe that market isn't that interesting to you, but the value prop of 100 bucks a month (since most real schemas that have the problem of requiring visualization are going to be >50 tables) for one schema that I then get to cancel or whatever isn't that strong. It'd be enough that I'd consider writing that Graphviz layer I keep screwing around with.
Thanks, good to know of the alternate constraints in other companies. Pricing aside, indeed it would be a different challenge tech-wise to have this as an offline tool.
My motivation for building this stemmed from the use-case of smaller-medium dev teams. We were using offline tools (e.g. MySQL Workbench) as part of our dev process and trying to keep it updated as documentation. Which was quite a nightmare to keep in sync between different devs. In this case having a central server was the silver bullet.
Curious - do you all currently use other tools (eg: workbench) for this?
It could be interesting to separate the visualization functionality from the syncing/sharing aspect.
For example, if you store the schema representation as a logical dump (CREATE statements) in a git repo, syncing/sharing becomes trivial. This also provides a branch workflow for collaborative editing, and the commit history serves as an authoritative changelog.
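As a sketch of that workflow, here's one way to dump a schema into per-object `.sql` files suitable for committing to a repo. This uses SQLite's `sqlite_master` catalog purely for illustration; for MySQL you'd reach for `mysqldump --no-data` or a tool like skeema instead:

```python
import sqlite3
import pathlib

def dump_schema(db_path: str, out_dir: str) -> None:
    """Write one CREATE statement per object so git diffs stay readable."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    # sqlite_master holds the original DDL for every table/index/view.
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE sql IS NOT NULL"
    ).fetchall()
    conn.close()
    for name, sql in rows:
        (out / f"{name}.sql").write_text(sql + ";\n")
```

With one file per table, `git log -- schema/users.sql` gives you the changelog for that table for free, and branches give you review on schema changes.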
From this point of view, it could be compelling to have an offline visualization tool that simply operates on top of the current local filesystem state of a repo. Ideally this could be paired with a self-hosted server/daemon that can generate a visualization of any arbitrary commit of a remote repo on GitHub, GitLab, etc.
Disclosure: I'm the author of an open source schema management tool https://skeema.io which is designed to support a repo-driven workflow for DDL execution. So I have a heavy bias towards storing schemas declaratively, as repos of CREATE statements.
> Pricing aside, indeed it would be a different challenge tech-wise to have this as an offline tool.
Maybe not an offline tool, then, but rather a hostable server (or isolated enterprise deployment, if you must), rather than a central cloud service. A virtual-appliance Docker image (that you can keep updated upstream) would be ideal, I think.
What’s sensitive about a (normalized) database schema?
I can see a schema definition being “secret sauce” (i.e. a competitive advantage), but I can’t see it being literally dangerous for the company to publish (e.g. because it contains customer PII), unless you’re doing something very strange.
...in which case, that makes me want to know about the schema even more! There’s probably some interesting lessons in there, if just “don’t do this; we deeply regret that we did.”
I think lots of people would be rather embarrassed to post their company's database structures. And lots of databases have table prefixes that can easily be traced to a company or product.
It can be a security risk. For example, imagine if a popular web framework or ORM is found to have an exploit involving some particular data type, when combined with auto-generated HTML forms. If the companies using the framework are known, and their DB schemas are publicly available, this could be a huge target for attackers.
I'd imagine it can also be a legal concern. For example, a schema may reveal presence of a soft-delete column, which conceptually violates GDPR. If the schema is made public, this could cause unwanted legal attention, even if the column is no longer actively used by any application code.
That’s not possible. The chip used in that model was limited to 16GB. I worked in IT at that time, and DB admins were begging me to get them a 15” with 32GB of RAM. I told them the chip from Intel was the limiting factor. When Apple updated a year or two ago, they moved to the next version of the chip, which allowed 32GB. So yeah, 32GB in 2015 was not possible.
Interesting. Any guesses why anyone would have a sealed one, given that anyone could assume it would depreciate relatively quickly? (It's not really a "hold" commodity.)
Also if it's not too personal, roughly how much did you pay?
The going rate for them when I picked it up was around $1,800-$2,500. Check past/completed listings for sealed 2015 MacBook Pros. The higher-end ones are rare, but they show up occasionally.
Beyond the technical aspects of what was going to be done with Kubrick's Napoleon, it seems as though a lot of the same ideas from this film were encoded, in reverse, into Barry Lyndon.
The more I read about Napoleon, and what could have existed had it been allowed to be made, the more I come to appreciate what was actually accomplished with Barry Lyndon.
Spend an hour with this https://www.youtube.com/watch?v=ok8bF8M7gjk and never look back.
Also https://www.youtube.com/watch?v=4yUXPZGhIX8 specifically for migrating from v1 to v2.