-> rm expected
Run command? [Y/n]
rm: cannot remove 'not': No such file or directory
rm: cannot remove ''$'\b\b\b\b': No such file or directory
rm: cannot remove 'expected': No such file or directory
I updated to fix that, thanks for pointing it out. The fix has echo print the command with your backspace characters escaped. See if you can break it now; it's interesting how many weird cases exist in ttys.
Heredocs are a little odd, because you can't see what they might be piping to.
This script, for example, looks sort of innocuous when run through your tool, because it's not obvious the heredoc is going to the stdin of a Perl interpreter. Your tool shows them as if they were two separate things that don't do much by themselves.
Looking at the script itself, it's more obvious.
#!/bin/sh
cat<<'EOF'|perl -nE'BEGIN{shift(@ARGV)}s#(.*)#$1#ee' /dev/null
say "hello"; #arbitrary perl code
EOF
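(The trick, for the record: BEGIN{shift(@ARGV)} throws away /dev/null, so -n falls back to reading the heredoc from stdin, and the double-e modifier on the substitution evaluates each line it reads as Perl code.)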
That's probably a nit, really, though. I don't know that anyone would target it on purpose.
Yup, at that point it's within the scope of bash's debugger. It shows the command that is actually being run, so it expands globs, shows the command within if predicates, and so on. If bash shows a command that isn't actually about to run, that is a bash bug.
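For anyone curious about the mechanism, here's a minimal sketch of the hook such a tool can build on; it's bash-specific, and the prompt wording is mine:

#!/bin/bash
# With extdebug set, a DEBUG trap that returns non-zero makes bash
# skip the command it was about to run.
shopt -s extdebug
trap 'read -r -p "Run \"$BASH_COMMAND\"? [y/N] " ans </dev/tty; [ "$ans" = y ]' DEBUG
echo this only runs if you answer y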
What would be amazing is a tool that analyses the script first, figures out which folders and files (and network endpoints) it influences, and lets you sandbox it accordingly.
This script wants to modify:
- /usr/local/program/*
- /etc/program/*
- $HOME/.program
Do you want to execute this? [Yes/No]
...because, you know, what happens when you execute a script that does rm -rf /usr in its 100th step?
In its full generality this runs afoul of the halting problem.
That doesn't mean what you want is completely unattainable, you just need to figure out whether you're okay with false positives, false negatives, or your tool just giving up on certain scripts (or some combination thereof).
I would be fine with a static analyser doing the last one (giving up when in doubt), considering that install scripts are a smaller subset of all possible shell scripts.
Such a static analyser would have two interesting aspects: on the end-user side, the one mentioned, reporting the touched paths; and it would double as a linter for the script developer.
Or just calling attention to the weird commands that trip its analysis up, in case they are path obfuscation. That should be easy for the admin to spot...
You could do this by running your script pivot-mounted into a namespace that mounts your "real" filesystem read-only and layers overlayfs on top to log changes. You can then terminate the script if the overlay diff gets too large (I assume on a 100GB disk you don't want 60GB of changes, and in any case you could tell it what to expect beforehand). That saves you from all that complicated analysis of files and folders and replaces it with something relatively foolproof.
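A minimal sketch of that idea, assuming root, util-linux's unshare, and overlayfs; all paths and the untrusted.sh name are made up, and the upper/work dirs sit on a fresh tmpfs because overlayfs refuses overlapping layers:

#!/bin/sh
# Upper/work dirs on their own tmpfs, separate from the lower layer.
mkdir -p /mnt/sandbox
mount -t tmpfs tmpfs /mnt/sandbox
mkdir -p /mnt/sandbox/upper /mnt/sandbox/work /mnt/sandbox/merged
# New mount namespace, so the overlay never leaks into the real system.
# Note: submounts of / (like /proc or /dev) are not part of the lower layer.
unshare --mount sh -c '
  mount -t overlay overlay \
    -o lowerdir=/,upperdir=/mnt/sandbox/upper,workdir=/mnt/sandbox/work \
    /mnt/sandbox/merged
  chroot /mnt/sandbox/merged /bin/sh /untrusted.sh
'
# Everything the script created or modified now sits in the upper layer:
find /mnt/sandbox/upper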
Not without danger or failure if the scripts depend on internet connectivity, since they could either exfiltrate data or change their behavior based on whether the connection is present.
This can be easily implemented on top of Docker's filesystem overlays/snapshots. You just run the script in question, e.g. in a fresh Ubuntu container, and then compare the overlay directories to see what changed.
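Docker even exposes the comparison directly; a rough sketch, assuming the script lives at ./install.sh (the container name is arbitrary):

docker run --name probe -v "$PWD/install.sh:/install.sh:ro" ubuntu sh /install.sh
docker diff probe   # A = added, C = changed, D = deleted
docker rm probe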
If you are happy with running it only once in the container, yes.
But if you run it first in the container to see whether it does anything bad and then run it on the host (or in a more valuable container), no.
The script might check whether it runs in a container. It might depend on the wall clock time. On /dev/urandom, whatever. As somebody already mentioned, the halting problem. No can do.
Very difficult to do in any kind of robust way. A script can run all kinds of things and use myriad forms of obfuscation, causing all kinds of obscure side effects.
When trying OP's code out, I had all the "Linux binaries" in mind, aka all the shitty self-unpacking installers that concat their binaries and dump them in /tmp before executing them.
(you know, like proprietary drivers almost always do)
It would be a huge improvement for sysadmins if a linter could be run before executing a shell script, and fall back to chroot and other sandboxing, like creating a user without network capabilities, in case it found something potentially malicious.
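The network part is already cheap to approximate today; a sketch using util-linux's unshare (the script name is illustrative):

# Run the script in a fresh network namespace: no interfaces except a
# downed loopback, so no connectivity at all. Needs root, or add -r to
# map the current user to root in a user namespace.
unshare --net sh ./install.sh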
This would still be defeated by any script that is nondeterministic, which is a real possibility if you're trying to defend against malicious scripts, or against very poorly written ones.
But the modifications might not be valid in the real system. For example, imagine a script that adds a new user to the system: in the container, it picks a new user ID that is free. A diff of the filesystem will show a new line being added to /etc/passwd - seems OK, right? But the user ID picked might clash with one on the real system, causing everything to fail when you apply the change.
The sandbox would provide a copy-on-write view of the actual filesystem (hence the possibility of data being stolen), so that scenario would work fine. (Though race conditions may be a concern.)
Indeed. If the person does not understand why/what is encoded by things like xxd or base64 or using tr to swap/filter characters, then one should hopefully pull the eject lever. When in doubt, one can sandbox scripts and see what they are in effect trying to do.
> When a command that is found to be a shell script is executed (see Shell Scripts), rbash turns off any restrictions in the shell spawned to execute the script.
Can you provide an example of a scenario where this restricted shell is useful?
Oh, I'd say when you're running your own stuff, it's only guardrails. I don't think anybody's gonna say it should be any account's login shell or anything. Sure, there's the idea that attackers could break out of it through such a simple feature-bug, but it's nice to be able to shed functionality when automating things that could go very wrong.
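For a taste of what gets shed, a quick transcript (abridged; exact messages vary by bash version):

$ rbash
$ cd /tmp
rbash: cd: restricted
$ /bin/ls
rbash: /bin/ls: restricted: cannot specify `/' in command names
$ ls > out.txt
rbash: out.txt: restricted: cannot redirect output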
Yes, I was instantly reminded of the time I implemented the core functionality of the 'time' command in shell script, only to find out about it months later.
Seconded. It's crazy that so few people seem to know about bashdb. I don't know of many other languages that are commonly used without a debugger.
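Getting started is just (assuming bashdb is installed; it's a separate package on most distros):

bashdb ./script.sh
# or, with the debugger support files in place:
bash --debugger ./script.sh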
It would be interesting to have a shell that allowed transactions like a database and could list what files have been affected while in the transaction.
You could snapshot your filesystem, then run the script and diff against the snapshot. Isolating executables (even shell scripts) is really outside the scope of what a shell normally provides.
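A minimal sketch of the snapshot approach, assuming a Btrfs root and root privileges (snapshot paths are illustrative, and snapshots don't cross subvolume boundaries):

btrfs subvolume snapshot -r / /before
sh ./install.sh
btrfs subvolume snapshot -r / /after
diff -rq /before /after          # crude but effective tree comparison
btrfs subvolume delete /before /after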
This sort of provides rollbacks, but not isolation. You would have to roll back all changes that happened to the filesystem (or to the whole system, if you don't know which filesystems were touched by the program) during the period between the snapshot and when you finish your inspection.
It would be interesting if you could mount the snapshot, then attempt to merge the changes into the live system once approved. I don't know of any filesystems that support merges, though.
Yeah, a shell is never going to provide isolation. If you want isolation, then snapshot your filesystem, assign it to a VM and run the script there. But this isn't actually useful because:
1. You probably will need network access to run whatever script this is. Once you give the script network access, you are open to a whole bunch of issues. Perhaps your SSH private keys leave your system, for example.
2. If you don't give it network access, it probably won't do anything malicious. Most of these scripts exist just to download some "thing", install it, maybe run it, and maybe update an RC file. The malicious code might be in the executables downloaded by the script rather than the script itself.
3. Just because a script does something reasonable in a VM doesn't mean it isn't malicious and won't do something else when it is run on bare metal.
In the end, you have to trust whatever software you decide to run (scripts included). How you gain that trust is up to you. I would steer away from gaining that trust by running the script and seeing what happens. Personally, I just rely on the reputation of the source of the software.
Verifying that some script doesn't screw up the configuration of my machine is a different story. I hate it when some script decides to run "pip install" or some other thing that subverts my package manager. Here, taking a snapshot is a reasonable choice.
PowerShell technically does, though I think it is deprecated. It also seems to be less of a security feature and more a tool for keeping the system stable.
"Reflections on Trusting Trust" wouldn't really apply here. The script is not a compiler, isn't even compiled, and can be easily understood by reading it. Unless you think there is some vulnerability specific to /bin/sh and this script, the citation is just wrong.
One complication is that websites can hijack your copy buffer, and the text you paste isn't the text you copied. I avoid this by pasting into an editor, not directly into a shell.
Newer versions of gnome-terminal have a feature where they will hold your pasted text at the trailing linefeed instead of executing anything, no matter how long it is or how many line breaks it contains. You can then inspect what you just pasted into the terminal, and even edit it, before actually executing it.
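Bash's readline can give you roughly the same protection even in terminals without that feature; a sketch, assuming bash 4.4+ (readline 7):

echo 'set enable-bracketed-paste on' >> ~/.inputrc
# Pasted text is now inserted as one highlighted block, and embedded
# newlines no longer execute anything until you press Enter yourself.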
Excuse my ignorance but when are you copying commands from a site you don't trust? If I don't trust a site I don't run anything it suggests to me, copy hijacking or no.
I think the most realistic threat model right now is "subverted browser extension", which is effectively equivalent to internet-wide XSS. Luckily I've only been hit once, and with adware, but it's a risk.
It depends on whether or not your threat model includes threats likely to exploit this attack surface. I'm assuming this is why GP said that a browser extension isn't a threat model.
Realistically, I copy all sorts of commands from all sorts of sites. Some fool has a blog saying "Run `kubectl blahblah -o yaml`", so I copy `kubectl blahblah -o yaml`, but the page can then inject nonsense into it, so when I paste it, something else runs.
Fortunately, my terminal emulator doesn't run on paste.
Open your shell prompt, press ^X^E (bash's edit-and-execute-command; it opens $VISUAL or $EDITOR), and paste the script into the editor. Check it for anything malicious, then save and exit (or exit without saving if you don't want to execute it). The shell will then execute the script.
If you are considering using this tool, then I would suggest that you seriously reevaluate your life choices. You should never run shell scripts without reading them first, ever. That is so irresponsible. Validating shell scripts will make you a more competent and informed worker. Tools like this breed incompetence, and encourage carelessness.
Yeah, this is an absurd argument. How do I know I can trust Linux unless I personally audit the whole kernel? How do I know if I can trust my processor without an x-ray machine?
I am a hacker by trade. My job is to literally exploit your weakness. If you can't be bothered to read a "shell script (!)" that you are running... good luck.
If that were true, then why did this post reach the front page of Hacker News? As Larry Wall stated, the three virtues of a programmer are: Laziness, impatience, and hubris. Tools like this embody all three.
There are a hundred reasons to use this script and your responses merely lack imagination:
* Ancient script written by people at the company no longer here that may encode a bunch of assumptions and lots of dead code
* Personal script that is not to the level of full production
* Run untrusted script in a constrained environment to see at what stage it does something ugly - this will bypass obfuscation based on adding lots of dead code
That's like three things I thought of just while writing this comment, in like the 90 seconds it took me to compose it. These ostentatiously dramatic comments of yours aren't that interesting. Hopefully coming generations of engineers will look at your comments and be like "I wish I wasn't like that".
I understand your point, but there is a big difference between running a 20 year old program written in C and running a shell script that someone with one or two years of experience hacked out in ten minutes. To answer your question, I do fuzz many of the GNU utilities that I use regularly, and I have discovered vulnerabilities that way. Of course it is unreasonable to read all of the code that runs in our operating systems, but it is not unreasonable to read shell scripts before you run them.
Plus, the risk associated with running Linux or any of its executables is orders of magnitude lower than that of copy/pasting and running a bash script one found on the internet.
I want this to run my own shell scripts. I have a bunch of scripts that are halfway between "documentation" and "automation"; mostly a record of the last time I did X. Add a prompt to eval a command or two, or to change hard-coded variables, and it's IPython for the shell.