a.k.a. don’t fear the terminal

As a (mainly) web developer, I develop software that mostly runs on Linux - whether it’s a virtual machine in the cloud, a Docker container, or my NAS under the staircase. So - for me - using any other operating system for development would be an extra hindrance.

To be honest, I like the customisability and freedom that only a Linux distribution can provide - I’ve been on Linux for more than 20 years, and I’m happy with it. But if you are new to it, let me give you a quick overview of why the terminal can be a useful tool.


The shell is basically a program that takes commands from the user and gives them to the operating system to perform. There are different flavors of a shell - just as there are different operating systems.

Many shells are based on - or extend the capabilities of - the basic POSIX compliant Bourne shell, sh (like bash, ksh or zsh), and there are many that don’t even try to be compatible with it (tcsh, fish, xonsh, …). Modern ones support command completion, syntax highlighting and many other fancy features.
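If you’re not sure which shell you are currently in, you can ask the system (a quick sketch; `$$` is the process id of the running shell):

```shell
# the login shell configured for your user
echo $SHELL

# the shell process actually running right now
ps -p $$ -o comm=
```

The two can differ - `$SHELL` is just what’s set in your user account, while `ps` shows what you actually launched.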

The Terminal

The terminal is basically an interface to access the command line interface of the OS. Long ago terminals were physical devices - a monitor plus a keyboard - connected to a mainframe, speaking protocols from the era of the DEC VT100 and the ANSI X3.64 standard.

Today we usually use terminal emulators that emulate these devices - still supporting those quite old protocols. It would be really great to have a much newer protocol that could replace the old ones, and there are a few attempts from individuals and small projects, but I doubt we’ll see much progress without support from big companies.


Let’s see some details about the most used (at least by default) shell, the Bourne Again Shell - bash.

Environment variables

You can set/read different variables. A plain shell variable is only visible in the current shell; once exported, it becomes an environment variable and is inherited by all child processes (programs executed from that shell).

$ VAR=something
$ echo $VAR             # prints "something"
$ unset VAR
$ echo $VAR             # prints nothing
$ export VAR=foobar     # makes it available for child processes too
$ env                   # show all environment variables
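The difference between a plain shell variable and an exported one shows up when you ask a child shell what it sees (a minimal sketch):

```shell
VAR=something
bash -c 'echo "child sees: [$VAR]"'    # child sees: [] - plain variables are not inherited
export VAR
bash -c 'echo "child sees: [$VAR]"'    # child sees: [something]
```

Note the single quotes: they prevent the *current* shell from expanding `$VAR`, so the child does the expansion itself.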

Basic scripting

There are basic control statements in shell scripts too.

The following script converts every regular file in the current directory (assuming they are images) to a 400px wide variant called smaller_[filename]:

$ for f in *; do
    if [[ -f $f ]]; then
        convert "$f" -resize 400x "smaller_$f"
    fi
  done

Command substitution

You can use the output of a command in another command - set it as the value of a variable, or print it, for example:

$ FILELIST=$(ls)
$ echo $FILELIST
a.txt b.txt files_in_this_directory ...
$ echo `ls` # backticks are equivalent to $()
a.txt b.txt files_in_this_directory ...
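Command substitution composes well with other expansions - a couple of small sketches:

```shell
# use a command's output inside a string
echo "This directory contains $(ls | wc -l) entries"

# substitutions can be combined freely
echo "Today is $(date +%A)"
```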

Special variables

  • !! - the last command

  • !$ - the last parameter of the last command

  • $$ - current process id

  • $0 - name of the script; $1, $2, … - positional parameters

  • $# - number of parameters

  • $* - all parameters (as one string)

  • $@ - all parameters (as list of strings)

  • $? - exit status of last command (0 - success, non-0 - something wrong)
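A tiny script demonstrates most of these at once (a sketch - the name args.sh is just an example):

```shell
#!/bin/bash
# args.sh - show the special parameters
echo "script name:    $0"
echo "first argument: $1"
echo "argument count: $#"
echo "all arguments:  $@"
exit 0
```

Running `bash args.sh one two three` prints the script name, `one`, `3` and the full argument list; right after it, `echo $?` in the calling shell prints the script’s exit status, 0.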

Keyboard shortcuts

  • TAB - completion (if enabled)

  • ^C - break (SIGINT signal)

  • ^D - end-of-file (tells the current process that you finished typing something)

  • ^Z - suspend (SIGTSTP signal)

  • ^S - pause flow-control (XOFF) - output is not updated, terminal might seem to be frozen

  • ^Q - restart flow-control (XON) - useful if you accidentally pressed ^S :)

  • ^L - clear screen

  • ^A - go to the beginning of the line

  • ^E - go to the end of the line

  • ^W - delete previous word

  • ^U - delete whole line (useful if you mistyped your password and want to start over)

  • Alt+B - go back one word

  • Alt+F - go forward one word

  • ^P (or cursor up) - previous command

  • ^N (or cursor down) - next command

  • ^R - reverse search command history

These are the emacs mode shortcuts, which is the default. You can switch to vi mode using set -o vi, but you probably don’t want it ;)


You can perform simple operations at shell startup by creating a .bashrc file in your home directory. (There are other files that are executed at different stages of shell startup, like .profile, .bash_profile, …)

Here you might:

Set some variables

export PATH=$PATH:~/bin
export PS1="Hello \t> "
export EDITOR=vim

Create aliases

alias ll='ls -la'
alias code='vim'


Every running process has three special file descriptors:

  • stdin - standard input
  • stdout - standard output
  • stderr - standard error

You can redirect these using <, >, 2> and |.

cmd < file            # use file as stdin
cmd > file            # redirects output into file (original file content will be lost)
cmd 2> file           # redirects error output into file
cmd >> file           # appends output of cmd at the end of file
cmd1 | cmd2           # use the stdout of cmd1 as the stdin of cmd2 (pipeline)
cmd < file.i > file.o # combine them as you want
cmd > file 2>&1       # redirects stdout and stderr into file
cmd1 < file1 | cmd2 | cmd3 > file2 2>&1 # ...
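A quick end-to-end sketch of these redirections (the /tmp paths are just examples):

```shell
# stdout and stderr go to different files
ls /tmp /nonexistent > /tmp/out.txt 2> /tmp/err.txt

# merge stderr into stdout and pipe both into a pager
ls /tmp /nonexistent 2>&1 | less

# append to a logfile instead of overwriting it
date >> /tmp/mylog.txt
```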

Some useful commands

man - manual

If you need some help about a command, you can try man [command] that will display the manual page of that command.


You often need to execute commands using root (administrator) privileges. This can be done via sudo (if you have the appropriate rights).

$ cat secure.txt
cat: secure.txt: Permission denied
$ sudo cat secure.txt
Password: [enter password]
This can be read only by root.

With su, you can log in as another user - by default root. This can be useful if you need to execute many commands as the root user.

$ su -
Password: [root password, which might not be set at all...]
# whoami
root
# exit
$ sudo su -
Password: [your password]
# whoami
root


To show (list) the files in a directory, you can use the ls command.

My most commonly used flags are:

  • l - long (owner, permissions, etc)
  • a - show hidden files (those that start with .)
  • t - sort by modification time
  • r - reverse order

So, show all files, with detailed information, in reverse order of modification (latest file at the bottom - probably the one you just modified/downloaded):

$ ls -ltra
-rw-r--r--  1 dyuri users    1106 Oct  3  2021 .gitconfig
drwxr-xr-x 51 dyuri users    4096 Mar 21 13:49 melo/

File permissions

In a POSIX system, file-like objects have a standard set of permissions:

  • 3 levels: user (u), group (g), others (o)
  • 3 types: read (r), write (w), execute (x)

First character in the permission string:

  • - - normal file
  • d - directory
  • l - symbolic link
  • c - character device
  • b - block device


Special permission bits:

  • s - setuid/setgid (in place of x)
  • t - sticky bit (for /tmp and such, in place of x)

Setting permissions:

$ chown <user>[:<group>] <file> # set associated user/group
$ chmod <permission> <file>     # set permissions

# examples
$ chmod u+x file      # execute permission to owner
$ chmod a+w file      # write access to "all"
$ chmod o-r -R dir    # revoke read access from "others" in "dir" recursively
$ chmod 751 cica      # 751 is the octal representation of rwxr-x--x
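The octal form is just the three permission bits read as a number per level - r=4, w=2, x=1, summed. So 751 decodes like this:

```shell
# 7 = 4+2+1 = rwx  (user)
# 5 = 4+0+1 = r-x  (group)
# 1 = 0+0+1 = --x  (others)
touch cica
chmod 751 cica
ls -l cica          # -rwxr-x--x ... cica
```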


Command for changing directory: cd

$ cd [dir]            # change directory to [dir]
$ cd $HOME            # go to your home directory
$ cd ~                # go to your home directory
$ cd                  # go to your home directory...
$ cd -                # change back to the previous directory

Creating/removing directories: mkdir/rmdir

$ mkdir dir           # create the directory "dir"
$ mkdir -p dir1/dir2  # create "dir1/dir2", "dir1" is also created if required
$ rmdir dir           # remove directory - only if empty (failsafe)

Other useful commands:

  • pwd - print working directory
  • pushd - push current directory to stack
  • popd - pop last directory from stack
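pushd/popd are handy when you need to jump somewhere temporarily and come back (a bash sketch; the directories are just examples):

```shell
cd /var/log
pushd /etc >/dev/null    # go to /etc, remember /var/log on the stack
pwd                      # /etc
popd >/dev/null          # jump back to where we were
pwd                      # /var/log
```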


Copy files: cp

$ cp file1 file2      # copy file1 to file2
$ cp -R dir1 dir2     # copy dir1 recursively into dir2

Move/rename: mv

$ mv file1 file2      # move file1 to file2

Delete file: rm

$ rm file             # remove file
$ rm -rf dir          # remove directory recursively (dangerous)

“Files” are “links” in the filesystem to entities (inodes) on the disk. If such an entity has no links, its disk space can be reused. So technically we don’t remove a file, we only remove a link to it. You can use the unlink command to remove such links, but rm is much more user friendly.

Creating a new hard link to an existing file:

  • ln file1 file2
  • file1 and file2 will point to the same content on the disk, they have to be on the same device (partition)
  • changing anything in either one will be visible in the other
  • removing file1 or file2 will not delete the content from the disc
  • (even removing both of them won’t remove it, but you won’t be able to easily find that content, and it can be overwritten)

Creating symbolic links:

  • ln -s file1 file2
  • symbolic links are “special text files” pointing to the original file (not to the content on the disk)
  • they don’t have to be in the same partition
  • removing file2 (the link) won’t affect file1
  • removing file1 (the original file) will break file2
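You can watch both behaviours in action (a sketch; the second column of `ls -l` is the hard link count):

```shell
echo "hello" > file1
ln file1 file2           # hard link - same content, link count becomes 2
ls -l file1              # the second column shows 2
echo "more" >> file2
cat file1                # both lines are there - it is the same content

ln -s file1 slink        # symbolic link
rm file1
cat slink                # fails - the symlink is broken now
```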

file - file info

Sometimes it’s hard to tell what a file actually is (especially if the extension is missing). But there’s a tool for that, called file:

$ file /tmp
/tmp: directory
$ file .bashrc
.bashrc: ASCII text
$ file kep.png
kep.png: PNG image data, 732 x 571, 8-bit/color RGB, non-interlaced
$ file --mime-type kep.png
kep.png: image/png
$ mv kep.png kep.whatever
$ file --mime-type kep.whatever
kep.whatever: image/png

Disk usage

If the backend application just stopped unexpectedly, one of the first things to check is the free disk space.

Disk usage per filesystems: df

$ df -h         # show free space - in human readable form
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       108G   80G   22G  79% /
/dev/sdb1       336G  202G  117G  64% /mnt/what
/dev/sdc1       110G  9.8G   94G  10% /home/dyuri/extra

To check the disk usage of files/directories: du

$ du -h               # show disk usage per file (recursively, human readable)
$ du -hs *            # show disk usage - summarized per directory
$ du -ks * | sort -n  # disk usage per directory, in kilobytes, sorted


Sometimes you need to check the content of a file without starting a bloated and slow editor. That’s what cat is good for.

$ cat file              # print the content of the file
$ cat file | more       # use "more" as pager (crazy shortcuts)
$ cat file | less       # use "less" as pager (better, vi-like shortcuts)
$ less file             # basically the same as above, without pipes

$ cat file | head -10   # show only the first 10 lines
$ cat file | tail -20   # show only the last 20 lines
$ cat file | head -12 | tail -2 # show only lines 11 & 12

$ tail -f file          # show the end of the file, but wait for changes (useful for logfiles)


Count the lines/words/characters in a file: wc

$ wc .bashrc
46  106 1730 .bashrc
$ wc -l .bashrc
46 .bashrc


Find string in the content of files: grep

# search for a css class
$ grep .grid-item *.css
global.css:	.search-recommender.fdCouponCar .grid-item.carLastItem {
newproducts.css:.newprod-grid .grid-view .grid-item-container {
newproducts.css:.newprod-dfgs .grid-view .grid-item-container {
search.css:.recipes-active .grid-item-container {

# you can use regular expressions
$ grep -e "\s.grid-view\s" *.css
newproducts.css:.newprod-grid .grid-view .grid-item-container {
newproducts.css:.newprod-dfgs .grid-view .grid-item-container {

# recursively (not posix, but most grep implementations support it)
$ grep -r -e "\s.grid-view\s"
common/product_grid.css:.ddpp .grid-view .grid-item-container {
common/product_grid.css:.newprod-featured .grid-view .grid-item-container {
newproducts.css:.newprod-grid .grid-view .grid-item-container {
newproducts.css:.newprod-dfgs .grid-view .grid-item-container {


Find files: find

# show all the files in the current directory recursively
$ find .

# show all css files
$ find . -name "*.css"

# search all css files for .grid-view class (the posix version of recursive grep)
$ find . -name "*.css" | xargs -- grep -e "\s.grid-view\s"
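find can also run the command itself, without xargs, using -exec - terminated by `+` to batch the file names, or `\;` to run once per file:

```shell
# same recursive grep, but find runs the command itself
find . -name "*.css" -exec grep -e "\s.grid-view\s" {} +

# run once per file instead of batching
find . -name "*.css" -exec grep -l "grid-view" {} \;
```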


Stream editor to modify streamed content: sed

$ cat .gitconfig | grep red
  old = red bold
  whitespace = red reverse
$ cat .gitconfig | grep red | sed 's/red/green/'
  old = green bold
  whitespace = green bold

# replace a css class name with another
$ find . -name "*.css" | xargs -- sed -i 's/\.cica\(.*\)/.kutya\1/'
# *ALWAYS* review such changes carefully!

awk is a more powerful beast, but it’s completely out of scope for this post.

Compressing files

Command to bundle files together: the Tape ARchiver - tar

$ tar cfv cica.tar cica/      # create a tar with the content of "cica/" (no compression)
$ tar xfv cica.tar            # extract the files

$ tar cfvz cica.tgz cica/     # create a tgz with the content of "cica/" (gzip compression)
$ tar xfvz cica.tgz           # extract

$ gunzip cica.tgz             # decompress cica.tgz to cica.tar
$ gzip cica.tar               # compress cica.tar to cica.tar.gz

# compress individual files
$ echo "alma" > alma.txt
$ gzip alma.txt               # => alma.txt.gz
$ zcat alma.txt.gz            # show the content of a gzipped file

Gzip is very quick, and the network is typically slower than the compression - that’s why it is/was used to compress HTTP content. There are other, more effective compression algorithms that can be used with tar, like bzip2 or xz.
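With GNU tar you can select those algorithms directly via flags (a sketch; j selects bzip2, J selects xz):

```shell
# j = bzip2, J = xz
tar cfvj cica.tar.bz2 cica/
tar cfvJ cica.tar.xz cica/
tar xfv cica.tar.xz        # modern tar auto-detects the compression on extract
```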


Command to check the running processes: ps

$ ps                          # show processes running in this shell
$ ps -ef                      # show all processes (System V style; the BSD-style equivalent is `ps aux`)

$ pgrep java                  # pid(s) of java process(es)

$ pstree                      # the process tree

$ ls /proc                    # the /proc filesystem is an interface to the process data (on some operating systems, like Linux)
$ cat /proc/[pid]/environ     # environment variables for the given process
$ ls -l /proc/[pid]/fd        # the open file descriptors of the process, 0 - stdin, 1 - stdout, 2 - stderr

$ top                         # show the running processes in an interactive way


You can suspend long running tasks and let them continue in the background. (Nowadays starting a new terminal/tab is often easier - but if you accidentally pressed ^Z, this knowledge is still useful ;) )

$ sleep 10000                 # long running process, suspended with ^Z
^Z
[1]+  Stopped                 sleep 10000
$ sleep 20000
^Z
[2]+  Stopped                 sleep 20000
$ jobs -l                     # list jobs
[1]- 139063 Stopped           sleep 10000
[2]+ 139909 Stopped           sleep 20000
$ bg %1 
[1]+ sleep 10000 &            # job 1 resumed in the background, prompt is not blocked
$ fg %2
sleep 20000                   # job 2 resumed in the foreground, prompt blocked
$ kill 139063
[1]+  Terminated              sleep 10000

The kill command does not necessarily murder the process, it just sends a signal to it. If you want to terminate a process, you can kill <pid> it, and hopefully it will gracefully shut down (by handling the SIGTERM signal). If it does not stop, but you really want to get rid of it, you can use SIGKILL (kill -9 <pid>), which will terminate it by force - but that’s dangerous.
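On the receiving side, a script can handle SIGTERM itself with trap - this is how graceful shutdown works (a minimal sketch; the script name is hypothetical):

```shell
#!/bin/bash
# graceful.sh - clean up before exiting when we receive SIGTERM
trap 'echo "cleaning up..."; exit 0' TERM

echo "running with pid $$"
while true; do
    sleep 1
done
```

From another shell, `kill <pid>` makes it print "cleaning up..." and exit with status 0, while `kill -9 <pid>` cannot be trapped, so no cleanup happens.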

Host name lookup

To follow the NSS way:

$ getent hosts freshdirect.com
<ip>            freshdirect.com
(root)$ echo "<ip> freshdirect.com" >> /etc/hosts
$ getent hosts freshdirect.com
<ip>            freshdirect.com

DNS lookups:

$ nslookup freshdirect.com

Non-authoritative answer:
Name:   freshdirect.com



To connect to a remote host you can use the ssh command. You can copy files using scp.

$ ssh commonsoda@bubble.codeandsoda.hu -i .ssh/id_rsa   # log in to bubble using rsa key

# forward local 1234 port to port 8000 of bubble (which is protected by firewall)
$ ssh -L 1234:localhost:8000 bubble.codeandsoda.hu

# copy files over ssh
$ scp -r bubble.codeandsoda.hu:/var/log/nginx/ ./logs/

# better way to do it, copy only what's updated
$ rsync -avz bubble.codeandsoda.hu:/var/log/nginx/ ./logs/

Socket connection

You can use the telnet command to connect to a TCP port, and send data:

$ telnet icesus 80
Connected to icesus.
Escape character is '^]'.
GET / HTTP/1.1
Host: uxe.icesus.freshdirect.com

HTTP/1.1 200 OK
Server: nginx/1.23.4
Date: Thu, 08 Jun 2023 11:17:48 GMT
Content-Type: text/html

You can also create a “fake” server that listens on a specific port - for testing clients, for example - using the nc (netcat) command:

$ nc -lp 1234

[in another shell]
$ telnet localhost 1234

HTTP connection

There are a lot of small tools to perform an HTTP request from the terminal. The most used/reliable one is curl; you can even copy requests from the browser’s network inspector as curl commands.

$ curl http://localhost:3000/

Fancy new stuff

Here I list some programs that are modern (but far from standard) replacements for the tools above.

  • ls -> exa
  • cat -> bat
  • cd -> z/zoxide
  • find -> fd
  • grep -> rg (ripgrep)
  • df -> duf
  • du -> dust
  • ps -> procs
  • top -> htop, gotop
  • nslookup -> dog, drill
  • curl -> xh, httpie

Other shells to try (replacing bash):

  • zsh
  • fish

Other fancy terminal stuff:

  • terminal multiplexers: zellij, tmux, screen
  • fuzzy finders: skim, fzf (for searching history, or files)