Hacker News | akx's comments

Sounds like you're not familiar with https://docs.astral.sh/uv/ ...

It sounds to me like they are: "You know they've given up on backward compatibility and version control, when the solution is: run everything in a VM, with its own installation."

uv taking over basically ensures that dependencies won't become managed properly and nothing will work without uv


What do you mean by "basically ensures that dependencies won't become managed properly"?

This doesn't do the same thing though, since it's not Unicode aware.

    >>> 'x\u2009   a'.split()
    ['x', 'a']
    # incorrect; in bytes mode, `\S` doesn't know about unicode whitespace
    >>> list(re.finditer(br'\S+', 'x\u2009   a'.encode()))
    [<re.Match object; span=(0, 4), match=b'x\xe2\x80\x89'>, <re.Match object; span=(7, 8), match=b'a'>]
    # correct, in unicode mode
    >>> list(re.finditer(r'\S+', 'x\u2009   a'))
    [<re.Match object; span=(0, 1), match='x'>, <re.Match object; span=(5, 6), match='a'>]


OP's .split_ascii() doesn't handle U+2009 either.

edit: OP's fully native C++ version using Pystd


Hmm? Which code are you looking at?


There's bound to be a way to turn a stream of bytes into a stream of unicode code points (at least I think that's what python is doing for strings). Though I'm explicitly not volunteering to write the code for it.


    import mmap, codecs
    from collections import Counter

    def word_count(filepath):
        freq = Counter()
        decode = codecs.getincrementaldecoder('utf-8')().decode
        tail = ''  # carries a word that straddles a chunk boundary
        with open(filepath, 'rb') as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for chunk in iter(lambda: mm.read(65536), b''):
                text = tail + decode(chunk)
                words = text.split()
                tail = words.pop() if words and not text[-1].isspace() else ''
                freq.update(words)
            freq.update((tail + decode(b'', final=True)).split())
        return freq


Oh that's neat, though I might split this into two functions in most cases; there's no need to entangle opening the file with counting the words in a file-like object.

That's two neat tricks that I'm definitely adding to my bag of python trickery.


Sure, but making one string from the file contents is surely much better than having a separate string per word in the original data.

... Ah, but I suppose the existing code hasn't avoided that anyway. (It's also creating regex match objects, but those get disposed each time through the loop.) I don't know that there's really a way around that. Given the file is barely a KB, I rather doubt that the illustrated techniques are going to move the needle.

In fact, it looks as though the entire data structure (whether a dict, Counter, etc.) should be a relatively small part of the total reported memory usage. The rest seems to be internal Python stuff.


I dislike loading files into memory entirely; in fact, I consider avoiding that one of the few interesting problems here (the other being counting words in a stream of bytes without converting the whole thing to a string).

If you don't care about efficiency you can just do len(set(text.split())), but that's barely worth making a function for.


This inspired me to update my over-a-decade-old glitch generator https://akx.github.io/glitch2/ with camera input and a handful of new modules.


It's pretty good. And for once, a software-engineering-ly high-quality codebase, too!

All too often, new models' codebases are just a dump of code that installs half the universe in dependencies for no reason, etc.


If someone's curious about those particular constants, they're the PAL Y' matrix coefficients: https://en.wikipedia.org/wiki/Y%E2%80%B2UV#SDTV_with_BT.470
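Spelled out as code, for normalized [0, 1] gamma-corrected RGB inputs, the BT.470/SDTV conversion on that Wikipedia page is:

```python
def rgb_to_yuv(r, g, b):
    """BT.470 (SDTV) R'G'B' -> Y'UV, for normalized [0, 1] inputs."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # the Y' (luma) coefficients in question
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v
```

White maps to Y' = 1 with zero chroma, which is a quick sanity check on the coefficients.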


You'd think there was a LUT you could apply to the digital copies during playback to make it look (more) like the original...
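Mechanically that's cheap; a hedged sketch with NumPy, using an arbitrary gamma tweak as a stand-in for a correction that would really have to be derived from measurements of the original:

```python
import numpy as np

# Hypothetical 256-entry per-channel LUT; the 1.2 gamma here is made up.
lut = np.clip((np.arange(256) / 255.0) ** 1.2 * 255.0, 0, 255).astype(np.uint8)

# Applying it to a frame is just fancy indexing, fast enough for playback.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
corrected = lut[frame]
```

The hard part is of course building the right LUT, not applying it.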


These days, once you have https://docs.astral.sh/uv/ installed, `uvx --from service-ping-sping sping` is pretty much zero effort to run this software.


Except if you don't like pulling random programs from the internet.


> Is `uv format` supposed to be an alias for `ruff check`?

I'd imagine not, since `ruff format` and `ruff check` are separate things too.


That makes some more sense. I think I just misunderstood what Charlie was saying above.

But I'll also add another suggestion/ask. I think this could be improved:

  $ ruff format --help
  Run the Ruff formatter on the given files or directories
  $ uv format --help
  Format Python code in the project

I think just a little more can go a long way. When --help is the docs instead of a man page, there needs to be a bit more description. Something like this tells users a lot more:

  $ ruff format --help
  Formats the specified files. Acts as a drop-in replacement for black. 
  $ uv format --help
  Experimental uv formatting. Alias to `ruff format`

I think man/help pages are underappreciated. I know I'm not the only one who discovers new capabilities by reading them, or by double-tabbing because I can't remember a flag name and then noticing something I hadn't before. Or maybe I did notice it before, but since the tool was new I focused on the main features first. Being able to read enough information to figure out what these things do, then and there, really speeds up usage. When the help lines don't say much I often never explore them (unless there's some gut feeling). I know the browser exists, but when I'm using the tool I'm not in the browser.


This is a fun model for circuit-bending, because the voice style vectors are pretty small.

For instance, try adding `np.random.shuffle(ref_s[0])` after the line `ref_s = self.voices[voice]`...

EDIT: be careful with your system volume settings if you do this.
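A standalone sketch of what that shuffle does, with a random array standing in for the real voice tensor (the shapes and names here are my assumptions, not Kitten's actual internals):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for self.voices: one small style vector per voice name.
voices = {"voice-a": rng.standard_normal((1, 256), dtype=np.float32)}

ref_s = voices["voice-a"]
before = ref_s[0].copy()
np.random.shuffle(ref_s[0])  # permutes the style vector in place -> a scrambled "voice"
```

The values survive, just in a different order, which is why the output is still voice-like but unpredictable.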


I'll help you along - this is the core function that Kitten ends up calling. Good luck!

https://github.com/espeak-ng/espeak-ng/blob/a4ca101c99de3534...

