
6 Lesser-Known Linux Commands You Should Try

Linux life isn’t all about ls and grep. Sure, you’ve probably used those tools to quickly find things and solve simple problems, but that’s only the beginning. Most Linux distributions come with a plethora of built-in tools that are easy to miss at first glance. Under the surface, Linux hides some wonderfully specific, concise programs for everything from basic text manipulation to complex network traffic engineering.

If you spend time Googling tutorials or guides on mastering Linux, you will find plenty of great material covering the basics. Learning how to navigate the command line with cd and ls is a must, but there is so much more you can accomplish without ever reaching for a third-party tool or language.

Engineers can be too quick to jump to a high-level programming language when they think something can’t be accomplished with focused programs and pipes. Sure, in most cases switching to a language like Python may be simpler and faster, but there is something to be said for achieving the same result without it. You cut out a massive dependency, the programming language, and immediately gain a wider range of compatibility. You may not be able to guarantee a particular language version is available across the different systems you interact with, and you might also be limited in what you can install on those systems. Learning to work with the native OS tools you’ve got is a sharp skill that will serve you well.

1. tc

Traffic Control. This is a suite of tools for manipulating network traffic inside Linux. The things you can accomplish with tc are both impressive and nauseating. This is not for the faint of heart and configuring different traffic manipulations is by no means simple, but learn to understand it and you’ll be able to harness the power of traffic engineering right inside Linux.

The man page isn’t exactly user-friendly, but not to worry: the Debian Wiki has an excellent breakdown of a few of the ways tc can be used.

A common use for tc is applying packet delay to a network connection. With tc you can manipulate incoming and outgoing packets to add delay, or even drop a certain percentage of them entirely. Let’s take a look at a relatively simple example where we apply delay to our own network connection. First, let’s see what our pings to Google look like:

pi@raspberry:~ $ ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=13.6 ms

64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=10.9 ms

64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=15.5 ms

64 bytes from 8.8.8.8: icmp_seq=4 ttl=117 time=13.8 ms

Not too shabby. We’ve got a nice ~13.5ms of delay between us and Google. What if we wanted to test how an application would perform with even more delay? Stress testing applications by inducing poor network conditions is an extremely common and important practice. If you don’t know how your app will perform under sub-optimal network conditions, then you don’t really know how it’s going to perform for everyone.

Let’s induce 100ms of delay with tc:

sudo tc qdisc add dev eth0 root netem delay 100ms

pi@raspberry:~ $ ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=110 ms

64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=116 ms

64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=119 ms

64 bytes from 8.8.8.8: icmp_seq=4 ttl=117 time=113 ms

Awesome! Now we can see our 100ms of delay on top of our existing delay to Google. Don’t forget to clear the impairment after you’re done testing:

sudo tc qdisc del dev eth0 root
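netem can model more than a fixed delay. As a quick sketch (again assuming eth0 is the right interface on your system), standard netem options let you add jitter on top of the delay and randomly drop packets, all in one qdisc:

# ~100ms delay with +/-20ms of jitter, dropping roughly 1% of packets

sudo tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1%

# And as always, remove the impairment when you're done

sudo tc qdisc del dev eth0 root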

2. whiptail

[Image: Classic whiptail message box in terminal.]

Have you ever wondered how those pretty terminal pop-up messages are generated during installations? With whiptail, of course! This is a handy single-purpose utility for displaying dialog boxes right inside the terminal. You may have noticed this same style during the Ubuntu installation and other popular command-line driven installs.

Whiptail is widely available and comes with most distributions for quick and easy use. This utility has a wide range of different displays and inputs for you to choose from:

Message boxes

Text input boxes

Password input boxes

Yes or no choices

Checklists

… and more!

Let’s try displaying a simple yes or no input box on the command-line with whiptail:

whiptail --yesno "would you like to continue?" 10 40

Using whiptail with the --yesno option is incredibly simple and straightforward. You pass the type of display you want, the message, and then the size (height and width) of the box to draw on the screen. Your output should look similar to this:

[Image: Yes or no box using whiptail.]

To see the return value of clicking yes or no, you can echo the exit status of the last command run in the console. If you type echo $? then you’ll see either a 0 for ‘yes’ or a 1 for ‘no’. This can easily be incorporated into a shell script, as in the example below:

#!/bin/bash

whiptail --yesno "would you like to continue?" 10 40

RESULT=$?

if [ "$RESULT" -eq 0 ]; then

  echo "you clicked yes"

else

  echo "you clicked no"

fi
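Capturing typed input works the same way, with one quirk: whiptail draws its interface on stdout and writes the user’s answer to stderr, so the usual trick is to swap file descriptors when capturing the result. A minimal sketch (the variable name is just for illustration):

#!/bin/bash

# Swap stdout and stderr so the typed answer lands in the variable

NAME=$(whiptail --inputbox "what is your name?" 10 40 3>&1 1>&2 2>&3)

echo "hello, $NAME"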

3. shred

When was the last time you deleted a file on Linux? How did you do it? Did you use rm and then forget about it? If there was any sensitive data in that file, you may want to think twice about using rm for that sort of thing. This is where shred comes in. This little utility securely erases a file by overwriting it with random data multiple times.

By using rm to delete a file, you’re really only removing the “link” or reference to the file that the OS knows about. Sure, the file disappears and you can’t see it anymore, but the raw data still exists on the hard drive for a period of time, and it is possible to recover it through careful forensic processes. Using shred, you can rest assured the data will be destroyed as thoroughly as possible (without incinerating the computer, of course).

Check out the wiki on shred for even more details on how it works.

The next time you want to be certain a file has been safely deleted, run the following command (the -u flag removes the file itself after overwriting it):

shred -u <file>
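If you want more control, a couple of standard GNU shred flags are worth knowing: -n sets the number of overwrite passes (the default is 3) and -z adds a final pass of zeros to hide the fact that the file was shredded. A quick sketch:

# Overwrite 5 times with random data, finish with zeros, then remove the file

shred -n 5 -z -u <file>

One caveat: shred relies on the filesystem overwriting data in place, so its guarantees are weaker on journaled or copy-on-write filesystems.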

4. split

Simple, effective, and even more flexible than the similarly named function in your favorite high-level programming language. split breaks a file apart by any number of characteristics, from line count to length in bytes, giving you more flexibility than just splitting strings on line breaks.

Let’s check out how we could split up a file that contains four lines. Say we wanted to break our file up after a certain number of lines, two in this case. We’ll use echo to create our test file and then split will handle the rest:

echo -e "line1\nline2\nline3\nline4" > test_file

split --lines 2 ./test_file test_file_split_

cat test_file_split_aa && cat test_file_split_ab

In this case, we’ve produced two new files from our original input file. The split command allows you to apply a prefix name to the newly created files, which is what we’ve done with the last argument to the command. The newly split files carry suffixes of aa and ab to keep things straight.

There are a ton of possible uses for split. You could break up large log files when they reach a certain size or line count, or use split to separate concerns in text files by splitting on a predefined delimiter to keep things nice and organized.
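Splitting by size is just as easy. Here’s a short sketch (the file names are hypothetical) that chops a large log into 10MB pieces and then reassembles them; because the generated suffixes sort alphabetically (aa, ab, ac, …), a shell glob stitches the pieces back together in order:

# Break the log into 10MB chunks named app_log_part_aa, app_log_part_ab, ...

split --bytes 10M application.log app_log_part_

# Reassemble the original from the sorted chunks

cat app_log_part_* > application_restored.log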

5. nl

Ever looked at a log file or some other plain-text output and thought:

“Wouldn’t this be great with line numbers?”

Line numbering makes things easier to read and much simpler to keep your place or point out a specific section. In keeping with “the Linux way” of doing things, there is a dedicated utility for just this sort of thing: using nl you can quite literally number lines. Feed it some text on stdin and it produces the same output, but with line numbers. Check it out:

echo -e "one\ntwo\nthree"

one

two

three

echo -e "one\ntwo\nthree" | nl

     1 one

     2 two

     3 three

You can even make some small tweaks to the numbering margin and separator if you prefer a different format:

echo -e "one\ntwo\nthree" | nl -s ": " -w 1

1: one

2: two

3: three
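One thing to note: by default, nl numbers only non-empty lines. If you want blank lines counted as well, pass -ba to number the entire body. A quick sketch:

echo -e "one\n\nthree" | nl -ba

     1 one

     2

     3 three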

It is easy to see how helpful this could be with larger files containing hundreds or even thousands of lines. The next time you need line numbers prepended, just pipe to nl:

cat <file> | nl

If you’re fond of using less to view large files, you can also simply pass the -N argument when opening the file to have line numbers displayed automatically. This skips the overhead of manipulating the raw file to apply line numbers, since less does not load the entire file at once.

6. flock

Locks. Love them or hate them, at some point you will have to deal with them. The concept of locking is fairly simple: if you need to perform an operation on state that other processes might access, your operation should “block” all other actions until it is complete. In certain cases this is handled automatically; in others you have to establish a simple system of locks yourself to ensure race conditions do not present themselves.

Using flock you can generate different types of locks that can be obtained during concurrent operations. The lock itself is really just a file in Linux. Let’s take a look at how we might use a lock to prevent multiple processes from interacting with a file:

#!/bin/bash

LOCKFILE=/tmp/lockfile

already_locked() {

  echo "lock is already held, exiting"

  exit 1

}

exec 200>"$LOCKFILE"

flock -n 200 || already_locked 

echo "lock obtained, proceeding"

sleep 10

echo "releasing lock, done"

If you run this shell script, it attempts to obtain a lock on /tmp/lockfile by assigning file descriptor 200 to it and requesting a “non-blocking” lock. With this style of locking, if the lock is already held, any other attempt to obtain it fails immediately instead of waiting.

Try running the script (which will sleep for 10 seconds) in one window and then in another window, try running a second instance of it. You’ll notice the first run obtains the lock and the second one fails because the lock was already obtained. With this example you could replace the simple sleep command with a set of long-running data processing commands or file updates you want to protect.
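Two standard flock variations are also handy. With -w you wait a bounded number of seconds for the lock instead of failing immediately, and flock can also wrap a single command directly, opening and locking the file for you (long_running_job below is a stand-in for whatever work you want to protect):

# In the script above: wait up to 5 seconds for the lock before giving up

flock -w 5 200 || already_locked

# One-liner form: flock manages /tmp/lockfile itself and runs the command under the lock

flock -n /tmp/lockfile -c "long_running_job"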

Thanks for reading! Under the surface of Linux hides a sea of amazing utilities at your disposal. Learning how to harness the power of these native programs is fun and forces you to think critically about each individual program’s purpose and effectiveness. The next time you want to explore what’s available, simply ls /usr/bin and start exploring!
