Useful GNU/Linux Commands for Developers — Part 1

13 min read · Aug 4, 2023

In this article, we will discuss GNU/Linux commands that are convenient for developers to solve everyday tasks. This compilation is aimed at those who don’t want to delve deep into the system but occasionally find themselves working on Linux servers. Here, we have gathered the tools we use and are accustomed to, though it’s worth noting that most of the mentioned tasks can be accomplished using alternative methods.

The article is inspired by an IT meetup where we exchanged useful life hacks. We divided all the commands we discussed into two parts. In the first part, we’ll cover commands related to obtaining general information about users and the system, file management, processes, and text manipulation. The second part will focus on bash and networking, particularly ssh.

XKCD: 1168

Viewing System and User Information

We have categorized the useful commands into groups. Let’s start with viewing information about users and the operating system.


The first command is to display user information:

id <user>

If you enter it without parameters, you will get information about the user you are currently logged in as, as well as a list of groups you belong to. Sometimes, it is helpful to check this list of groups to determine if you have the necessary privileges, such as executing Docker commands.
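A quick sketch of how we use it (assumes GNU coreutils; the `docker` group check is just an illustration):

```shell
# Print the current user name and the groups it belongs to
id -un
id -nG

# Check membership in a specific group, e.g. "docker"
if id -nG | grep -qw docker; then
    echo "docker group: yes"
else
    echo "docker group: no"
fi
```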

To view various system information (users, uptime, etc.), there are three commands:

  • who
  • uptime
  • w
The last command is essentially a compilation of the first two. It allows you to see who has logged into the system from which IP address, how long the system has been running, and the average load. This information is often useful, especially when dealing with servers. For instance, if a server goes offline and then suddenly reappears on the network, using this command can help determine if it was rebooted.

Another command that shows reboots, as well as the latest logins to the system, is:

last
If you need data for a specific user, you can pass their username as a parameter to this command. This is useful if there is a suspicion that someone might have logged into the system without authorization.

Memory and Processor

You can obtain information about system memory usage using several commands. The first one we’d like to explain is:

free
Without any parameters, it displays the current memory utilization in the system, including the total memory, SWAP size, used and free space, buffers, etc. Using the `-h` parameter, you can get the same information in a human-readable format, i.e., in megabytes and gigabytes. With the `-m` parameter, the data is shown in megabytes.
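As a sketch (assuming the procps `free`, present on most Linux distributions):

```shell
free          # raw numbers, in kibibytes by default
free -h       # human-readable: MiB/GiB
free -m       # everything in megabytes

# Example: pull the total RAM (MB) out of the second line
total=$(free -m | awk 'NR==2 {print $2}')
echo "total RAM: ${total} MB"
```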

Another command to gather memory information is:

top
Its output is dynamic, allowing you to track memory consumption in real time, with information refreshing every few seconds (every 3 seconds by default). You can use shortcuts to sort the output:

  • Shift+m — sort by memory usage.
  • Shift+p — sort by CPU usage.
  • 1 — display load per CPU core; this can be overwhelming on modern servers with 48 or more virtual cores, but it’s handy on smaller virtual machines, especially when dealing with significant network traffic to understand how the load is distributed across cores.

To exit the utility, use `q`.
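Besides the interactive mode, `top` has a batch mode that is handy in scripts — a sketch, assuming the procps version of `top` (the number of header lines may vary between versions):

```shell
# -b: batch mode (plain text, no curses), -n 1: a single snapshot
top -b -n 1 | head -n 15

# First five process lines of the snapshot (top sorts by CPU by default);
# tail -n +8 skips the summary header, which is 7 lines in procps top
top -b -n 1 | tail -n +8 | head -n 5
```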

An enhanced version of `top` is available:

htop
It dynamically visualizes core utilization in color and offers numerous settings and dynamic filters for more detailed information. Refer to the man page to learn how to use these features. You can exit `htop` with `q` or `F10`.

There’s also a more advanced set of tools:

perf
This is one of the most convenient debugging and tracing tools in Linux. These utilities allow you to view the system’s load with more granularity. You can delve into each specific process, see the functions it calls, and more.

Our developers most often use the utility that displays resource consumption:

perf top

The output provides a breakdown of processes, functions, and currently loaded libraries. For example, you can see which functions ClickHouse is currently executing or what Java is doing. By moving the cursor, you can pause the dynamic updates and carefully examine what is happening.

With a specific parameter, the command provides more summarized information:

perf top --sort comm,dso

This set of tools allows you to visualize stack traces. With specific parameters, it initiates tracing and profiling, effectively acting as a profiler, making it an incredibly useful tool.


For disk information, one of the most popular commands is:

iotop
It allows you to see which process is currently consuming disk resources (read and write speeds).

There is also the `iostat` utility, which displays the load on specific block devices, showing where writes are going, in what volume, and the percentage of load and utilization. This utility has a considerable number of parameters. It is often used with the following parameter:

iostat -xk <sec>

The parameter specifies the frequency of information updates. This utility is useful for understanding which of the disks is heavily loaded.

Working with Processes

Earlier, we discussed how to view some information about running processes. Let’s explore this topic in more detail.

I believe everyone is familiar with the `ps` command. The most popular combination of parameters (the dash can be included or omitted) is:

ps -axu

This command displays a list of processes, including information about who started each of them. The output can be quite extensive.
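A couple of variations we find handy (the `--sort` option is specific to GNU procps `ps`; BSD `ps` spells it differently):

```shell
ps aux | head -n 5               # first few processes, with their owners
ps aux --sort=-%mem | head -n 5  # biggest memory consumers on top
ps aux --sort=-%cpu | head -n 5  # biggest CPU consumers on top
```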

Sometimes, you may need to see the parent process. The following command allows you to view the process tree:

pstree
In this format, the command provides a minimal amount of data. However, using one of its many parameters, you can display the process ID and other additional information.


To send signals to processes, you can use the `kill` command. Despite its name, it is not just for “killing” processes. In its simplest form, without specifying a signal, it sends SIGTERM (15), which gracefully stops the process: the process flushes the data it needs, cleans up, and exits:

kill <PID>

If you pass signal `-9` (SIGKILL), the process is killed immediately, without a chance to finish writing:

kill -9 <PID>

This can be useful in emergency situations when the system is overloaded and you need to forcefully terminate a process that is interfering with its normal operation.
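A safe way to see this in action is to terminate a throwaway process (signal names work too — `kill -TERM`, `kill -KILL`):

```shell
sleep 300 &                 # a long-running throwaway process
pid=$!
kill "$pid"                 # default signal: SIGTERM (15), a polite stop
wait "$pid" 2>/dev/null || true

# kill -0 sends no signal; it only checks whether the process still exists
if kill -0 "$pid" 2>/dev/null; then
    echo "still alive"
else
    echo "terminated"
fi
```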

Sending signal `-1` (SIGHUP) lets many daemons do the equivalent of `systemctl reload` — a service reload without stopping it: reopening log files, re-reading configuration, and so on. In the case of Nginx, the old worker processes are shut down and new ones are started with the new configuration.

When you suddenly have a bunch of identical processes spawned, this command comes in handy:

killall <CMD>

You can pass the name of `cmd` from the output of the `ps` command as a parameter to `killall`.


Linux allows running processes with different priorities and changing them dynamically. The priority (“niceness”) is a number ranging from -20 to 19; lower values mean higher priority. By default, processes start with priority 0. The highest priority, -20, is reserved for work that must be handled first, such as kernel threads. The `top` output includes the NI column (short for “nice”), which displays each process’s priority.

You can launch a process with a specific priority using the `nice` command:

nice -n -20..19 <CMD>

To change the priority “on the fly,” you can use:

renice -20..19 <PID>

A similar command exists for I/O operations, specifically for disk writes:

ionice -c<1..3> -n<0..7> <CMD>

The first parameter is the scheduling class (1 — real-time, 2 — best effort, 3 — idle), and the second parameter is the priority within the class. If you don’t want a process, such as a backup, to interfere with the main system’s operations, you can assign it the scheduling class 3 using this command. In this case, disk writes will only occur when the system is idle and the disk is not under load.
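A quick illustration of `nice` (the coreutils version prints the current niceness when run without a command; note that unprivileged users can only lower priority — going below 0 requires root):

```shell
nice                  # current niceness, usually 0
nice -n 10 nice       # run `nice` itself at niceness 10; prints 10 when the base is 0
nice -n 19 sleep 0.1  # run a command at the lowest CPU priority
```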

Background Execution

In Linux, there are over 10 ways to run commands in the background. Three of the most popular, mentioned in most courses and books, are `&`, `nohup`, and `disown`.
We won’t go into detail about them here — we mention them out of respect for their popularity and recommend looking at the man pages. Running with `nohup` lets you redirect output to a file, but it is inconvenient if the program expects interactive input (juggling the text files gets complicated). Therefore, our developers often use the `screen` command.

`screen` is essentially a text screen manager. The command allows you to run processes in the background, with separate output that doesn’t clutter the terminal screen.

Screen has many functions. For example, the Ctrl+a,d combination detaches you from the console session, leaving the process running in the background. By the way, if you don’t need multiple screens and just want to send a process to the background, you can use the simpler:

dtach -A <socket> <CMD>
It runs the process in the background and detaches its output from the console. You can return to it using the same command.

We recommend studying the man pages for `screen` and `dtach`.

There are more modern alternatives to `screen` — `tmux` and `dvtm`. Interestingly, `dvtm` doesn’t have a detach feature out of the box, so it’s used in combination with the `dtach` utility.

Timings and Schedule

The last thing worth discussing in the context of processes:

To evaluate the execution time of a command, you can use the following command:

time <CMD>

For example, you can run:

time curl <URL>

and find out how long the request took. In the second part, we will also discuss an advanced variant when you need to inspect the connection in detail, identifying at which stages time is being lost.

Similarly, you can determine how much time was taken for a backup to complete. You can run the command with timing, go about your tasks, and then return to see how much time it took.
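A minimal sketch (in bash, `time` is a shell keyword and writes its report to stderr):

```shell
time sleep 0.2
# Typical bash output (values will vary):
# real    0m0.2xxs
# user    0m0.000s
# sys     0m0.00xs
```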

For periodic execution of commands, you can use:

watch <CMD>
This can be helpful when downloading large files. In one terminal, you can initiate the download, and in another, you can use:

watch ls -lh

to track the file’s growth and estimate when the download will complete.

Working with Files and Directories


Let’s start with the command that displays the current directory:

pwd
It is highly recommended for developers to always run this command before passing a dot (current directory) as a parameter to any other command. This helps to avoid accidentally executing commands in the wrong location.

You can create a directory using the command:

mkdir -p /a/b/c/d

If you have the “a” directory created, but “b”, “c”, and “d” don’t exist, they will be created one after another. This saves you from having to use `mkdir` for each level.
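A sketch of the behavior (the scratch directory from `mktemp` is just for illustration):

```shell
tmp=$(mktemp -d)           # scratch directory for the demo
mkdir -p "$tmp/a/b/c/d"    # creates every missing level in one call
ls "$tmp/a/b/c"            # prints: d
mkdir -p "$tmp/a/b/c/d"    # -p also means: no error if it already exists
```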


To display a list of files and directories (list), you can use the command:

ls -l

The `-l` parameter provides a long output. In some Linux distributions, there is a ready-made alias for it — `ll` (we will discuss aliases in more detail in the second part). The `-h` parameter displays the file sizes in human-readable format. Hidden files can be displayed with `-a` (often combined as `ls -la`). When using the `-n` parameter, user and group names are replaced by their numeric IDs. This can be helpful when you need to check the ownership of files mounted into a container (where numeric IDs are what matters).

To find a command’s binary, sources, or man page, you can use:

whereis <CMD>

For example:

whereis vi

will show that `vi` is located at `/usr/bin/vi`. This command is useful when developing scripts that run without root. Ordinary users usually don’t have access to all directories, so in scripts you have to specify full paths, and `whereis` helps you obtain them.
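A short sketch (`whereis` comes with util-linux and is present on most distributions):

```shell
whereis ls      # binary and man page locations, e.g. /usr/bin/ls ...
whereis -b ls   # -b: binaries only
whereis -m ls   # -m: man pages only
```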

A similar command shows the path to the executable that will actually run when you enter a name:

which <CMD>
For example, for `ls` (which is an alias in bash, we’ll talk more about this in the next part), it will be:

alias ls='ls --color=auto'

Another command that helps find files based on names and various other criteria (owner, creation time, modification time, access time, etc.) is `find`. For example:

find -size +1000k -name "test"

This is a very useful tool. We could talk about it extensively, but we won’t go into detail here — we recommend checking the man pages.

The `find` command also has an `-exec` parameter, which lets you find files and run a command on each of them.
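A sketch of both features; the file names and the commented-out log-rotation path are made up for illustration:

```shell
tmp=$(mktemp -d)
touch "$tmp/a.log" "$tmp/b.log" "$tmp/notes.txt"

# Find by name pattern; {} stands for each match, \; terminates the -exec command
find "$tmp" -name '*.log' -exec echo found: {} \;

# A common real-world pattern: delete logs older than 7 days
# find /var/log/myapp -name '*.log' -mtime +7 -delete
```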

Disk Space

To assess free disk space on mounted file systems, you can use `df` (disk free):

df -h

Without parameters, it provides information in blocks. The block size may vary depending on the operating system version. In older Linux systems, blocks were 512 bytes, but on file systems like GPFS and similar, running `df` without parameters might behave oddly.

The `-h` parameter provides human-readable output. The `-i` parameter shows the number of used inodes. Sometimes, even if you have a lot of disk space available, an application might not be able to write to it because the inodes are exhausted (inodes are essentially metadata of the file system). If this happens, you might need to come up with a solution. For many file systems, the number of inodes is set when it’s created, so you may have to recreate the file system.
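The two views side by side, checked against the root file system:

```shell
df -h /    # human-readable space usage for the root file system
df -i /    # inode usage instead of bytes
```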

The disk usage utility is your first companion when you need to find out where your disk space is going:

du .

In this form, the command prints the size of every directory within the current one. Add `-h` to get the sizes in human-readable format. For `du`, it’s recommended to check the man page — this utility has many useful parameters, including the maximum depth of recursion and more.

For example, the following command allows you to view the size of all directories inside `/var`, without going further inside them:

du -h --max-depth=1 /var

By the way, many GNU/Linux parameters, such as `--max-depth`, come in both long and short versions. Two dashes before the name indicate the long version, and one dash the short version. You can use either option, whichever is easier to remember.
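A self-contained sketch (the directory names are invented; the 1 MiB file just gives `du` something to measure):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/logs" "$tmp/cache"
head -c 1048576 /dev/zero > "$tmp/logs/app.log"   # a 1 MiB file

du -h --max-depth=1 "$tmp"   # per-subdirectory totals, one level deep
du -sh "$tmp"                # -s: just the grand total (same as --max-depth=0)
```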

Sometimes, during application testing, working files that are still being written to get deleted. In that case, disk space isn’t freed until the processes close them. To find files marked as deleted, you can use the `lsof` command:

lsof
It allows you to see which processes in the system have files open. This can be useful when you need to unmount a partition but the system doesn’t allow it.

To create zero-length files, you can use the `touch` command. Originally, `touch` was created to update file timestamps (the access and modification times). Because it creates the file if it doesn’t exist, that has become its primary use, especially when dealing with services that require the presence of a flag (i.e., a zero-length file) with a specific name in a certain directory.
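A sketch of the flag-file pattern (the file name is invented):

```shell
tmp=$(mktemp -d)
touch "$tmp/maintenance.flag"   # created empty, because it didn't exist
ls -l "$tmp/maintenance.flag"
touch "$tmp/maintenance.flag"   # already exists: only the timestamps update
```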


To archive and extract files and directories, you can use the `tar` command. For example, extracting a gzip-compressed archive looks like this:

tar xzf file.tgz

Originally, `tar` (Tape Archiver) was created for tape storage, but now it is used much more widely. By the way, it can work without compression too. With `tar`, you can, for example, pack the `/etc` directory before performing some destructive actions on a server or workstation.

The most commonly used parameters:

  • c — create an archive.
  • z — use gzip for compression (on modern processors, bzip2 is better as it compresses more efficiently, but it is more resource-intensive; to use it, replace `z` with `j`).
  • x — extract an archive.

For more details about `tar`, it’s recommended to check the man page.
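The full create/list/extract cycle, sketched on a throwaway directory (`-C` changes into a directory before archiving or extracting):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/etc-copy" && echo "key=value" > "$tmp/etc-copy/app.conf"

tar czf "$tmp/backup.tgz" -C "$tmp" etc-copy   # c: create, z: gzip, f: file name
tar tzf "$tmp/backup.tgz"                      # t: list contents without extracting
mkdir "$tmp/restore"
tar xzf "$tmp/backup.tgz" -C "$tmp/restore"    # x: extract into another directory
```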

Another archiver is `pigz`. It is the same as `gzip` but allows compression and decompression using all available cores, which can make the process faster.

Working with Text

Let’s start with the universal Swiss Army knife for working with text files — `grep`. This command can do many useful things, such as handling different flavors of regular expressions (selectable via command-line parameters). We often use the following options:

  • `-r` — recursive search, starting from the specified directory.
  • `-v` — invert the match, i.e., display all lines that do not contain the specified pattern. This parameter is often used when parsing logs — you can collect all errors and then filter the 502s out.
  • `-A NUM` and `-B NUM` — display NUM lines after or before each match, respectively.
  • `-C NUM` — works like both of the previous parameters together (NUM lines both before and after).
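The log-parsing pattern from above, sketched on a tiny made-up access log:

```shell
tmp=$(mktemp -d)
printf 'GET /a 200\nGET /b 502\nGET /c 500\nGET /d 200\n' > "$tmp/access.log"

grep ' 50' "$tmp/access.log"                   # all 50x responses
grep ' 50' "$tmp/access.log" | grep -v ' 502'  # ...with the 502s filtered out
grep -C 1 ' 500' "$tmp/access.log"             # one line of context around matches
```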

To output the entire content of a text file to the console, you can use the `cat` command. The text remains on the screen.

There are other commands that essentially do the same thing but in slightly different ways:

  • `more`: This displays the file page by page (with pagination). Like `cat`, the text remains on the screen, and you can scroll through it.
  • `less`: This provides a more advanced version with scrolling, searching, and more. For example, `less` is used when viewing the `man` page. It allows you to scroll up and down and search for specific text. The syntax is similar to `vi` — you can type a slash to search for the desired text, and all matches will be displayed. Pressing `n` shows the next match.

To display the first and, respectively, last lines of text files, use the `head` and `tail` commands. By default (without parameters), `head` displays the first 10 lines and `tail` displays the last 10 lines. `tail` has the `-f` parameter, which is useful for monitoring logs. It will continuously display what is being appended to the file until you stop it with `Ctrl+C`.
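A sketch (the `tail -f` line is commented out because it runs until interrupted):

```shell
tmp=$(mktemp -d)
seq 1 100 > "$tmp/numbers.txt"

head -n 3 "$tmp/numbers.txt"    # 1 2 3, one per line
tail -n 3 "$tmp/numbers.txt"    # 98 99 100
# tail -f "$tmp/numbers.txt"    # follow lines as they are appended (Ctrl+C to stop)
```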

The `diff` utility is used to find differences between two files. The differences found can be saved and applied using the `patch` command. In the days before GitHub, developers exchanged changes through patches sent by email. Nowadays, these commands can be useful when you need to transfer 10 changes to different places.
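A sketch of the diff-and-patch workflow on two invented config files (the `patch` step is commented out in case the utility isn’t installed):

```shell
tmp=$(mktemp -d)
printf 'listen 80\nworkers 2\n' > "$tmp/old.conf"
printf 'listen 80\nworkers 4\n' > "$tmp/new.conf"

# -u: unified format; diff exits 1 when the files differ, hence || true
diff -u "$tmp/old.conf" "$tmp/new.conf" > "$tmp/change.patch" || true
cat "$tmp/change.patch"

# To apply the saved diff elsewhere (requires the `patch` utility):
# patch "$tmp/old.conf" < "$tmp/change.patch"
```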

To count the number of lines in a file, use the `wc -l` command. This command is often used on the output redirected from other commands. For example, you can use `grep` on a log file and redirect its output to the `wc -l` command to count how many times a specific event occurred.
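The counting pattern from above, on a tiny made-up log:

```shell
tmp=$(mktemp -d)
printf 'error: disk\nok\nerror: net\nok\n' > "$tmp/app.log"

wc -l < "$tmp/app.log"               # total lines: 4
grep 'error' "$tmp/app.log" | wc -l  # matching lines: 2
grep -c 'error' "$tmp/app.log"       # grep can also count matches itself: 2
```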

For more in-depth work with text, there are editors like `vi` and `vim`. You could talk about them for a very long time, so I recommend keeping a cheat sheet handy — it’s convenient to print one out and keep it in front of you.

For those who haven’t mastered the magic of `vim`, there’s a text editor called `nano`. It often appears in trimmed-down distributions on routers and other devices (unlike `vim`, which has grown significantly in size). Nano provides hints for basic actions and is convenient when you need to quickly edit a file. Some developers use only nano because, in their words, “vi only knows how to scream and mess up the text.”

That’s all for now. In the next part, we’ll talk about bash commands, networking, and SSH.

Thanks to Igor Ivanov, Anton Dmitrievsky, Denis Palaguta, Alexander Metelkin, and Nikolay Eremin (Maxilect) for compiling this selection.




We are building IT solutions for the AdTech and FinTech industries. Our clients are SMBs across the globe (including the USA, the EU, and Australia).