Calculate the sum of the third column from many files

I have many files with three columns, of the form: file1 | file2 1 0 1 | 1 0 2 2 3 3 | 2 3 7 3 6 2 | 3 6 0 4 1 0 | 4 1 3 5 2 4 | 5 2 1 The first two columns are the same in each file. I want to calculate the sum of the third column from every file to receive some
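
A minimal sketch of the row-wise sum, assuming every file has the same number of rows and using the hypothetical names file1, file2, file3:

    # accumulate the 3rd field per row across all files; the first two
    # columns are taken from the last file read, since they are identical
    awk '{key[FNR]=$1" "$2; sum[FNR]+=$3}
         END{for(i=1;i<=FNR;i++) print key[i], sum[i]}' file1 file2 file3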

Extract values from a fixed-width column

I have a text file named file that contains the following: Australia AU 10 New Zealand NZ 1 ... If I use the following command to extract the country names from the first column: awk '{print $1}' file I get the following: Australia New ... Only the fir
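
Since "New Zealand" contains a space, whitespace splitting cannot work here; one sketch is to cut on character positions instead, assuming the name column occupies the first 15 characters (adjust the width to the real layout):

    # take the fixed-width name column, then trim trailing padding spaces
    cut -c1-15 file | sed 's/ *$//'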

Add an extra line to a text file after every N lines

Hi, I have a Unix command that produces a list of IP addresses along with other column information. I want to add something to the command so that it displays a set of 3 lines, then a space or ----, then the next 3 lines, and so on. How can I
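
A sketch that inserts a separator after every third line of output (your_command stands in for the real pipeline):

    # print each line, and a ---- marker after every 3rd one
    your_command | awk '{print} NR%3==0{print "----"}'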

Remove CRLF from the middle of a file

I have been receiving a text file where each row should be 246 columns in length. For some reason an errant CRLF is being inserted in the file after every 23,036 characters, causing all sorts of problems. The file is in a Windows format, all line end
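
If the records really are fixed-width, one sketch is to strip every line break and rebuild 246-character rows (infile and outfile are placeholder names):

    # delete all CR and LF bytes, then re-wrap into 246-character records;
    # pipe through sed 's/$/\r/' afterwards if CRLF endings must be restored
    tr -d '\r\n' < infile | fold -w 246 > outfile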

Write the output of two loops in two columns in the shell?

How can I write the following in an efficient way: for j in {1..339}; do for i in {1..427}; do echo -e $j'\t'$i >> ofile.txt; done; done Here is one alternative without explicit loops: join -j9 -t$'\t' <(seq 339) <(seq 427) | cut -f2- > ofile.tx
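
Another loop-free sketch that produces the same tab-separated pairs from a single awk process:

    # generate the full j/i cross product without any shell loops
    awk 'BEGIN{OFS="\t"; for(j=1;j<=339;j++) for(i=1;i<=427;i++) print j,i}' > ofile.txt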

How can I fix a string in a shell script (remove

I am web scraping a cable modem's HTML diagnostics page source code using a shell script and I need to fix some coding errors that were made by Motorola. There are a few occurrences in many pages that are missing the closing > at the end of an input tag
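
A hedged sed sketch, assuming the broken tags are input tags whose line simply ends without the closing > (page.html is a placeholder name):

    # append '>' to any line that opens an <input tag but never closes it
    sed '/<input[^>]*$/ s/$/>/' page.html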

Rearrange columns with empty values using awk or sed

I want to rearrange the columns of a txt file, but there are empty values, which cause a problem. For example: testfile: Name ID Count Date Other A 1 10 513 x 6 15 312 x 3 18 314 x B 19 31 942 x 8 29 722 x When I tried $ more testfile |awk '{print $2
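
Because whitespace splitting collapses the empty cells, one sketch is GNU awk's fixed-width mode; the widths below are guesses from the sample and the reordering is only illustrative:

    # FIELDWIDTHS keeps empty cells in their column positions
    gawk 'BEGIN{FIELDWIDTHS="5 3 6 5 6"} {print $2, $1, $3, $4, $5}' testfile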

AWK: convert a string to a column

I want to convert a string (e.g. abcdef) to a column. This is what I want: a b c d e f I know how to convert a string to a column by using sed: $ echo abcdef | sed 's/[^.]/&\n/g'|sed '$d' But how do I convert it using awk? You can set the field separator to an emp
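
A sketch of the empty-field-separator approach (supported by GNU awk and mawk, where an empty FS makes every character its own field):

    echo abcdef | awk 'BEGIN{FS=""} {for(i=1;i<=NF;i++) print $i}'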

Filtering text output on Linux

I get output from Junos switches in this format: Physical interface: ge-0/0/7, Enabled, Physical link is Up Queue counters: Queued packets Transmitted packets Dropped packets 0 N4M-Q1 0 42210774942 1163342 I need only the interface name and dropped p
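
A sketch that remembers the interface name and prints it next to the last field of the counter line, assuming the dropped-packet count is the last field as in the sample; the N4M-Q1 pattern also comes from the sample and should be adjusted to whatever identifies that line:

    awk '/^Physical interface:/ {iface=$3; sub(/,$/, "", iface)}   # e.g. ge-0/0/7
         /N4M-Q1/               {print iface, $NF}' output.txt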

How to do calculations on the lines of a file in awk

I've got a file that looks like this: 88.3055 45.1482 37.7202 37.4035 53.777 What I have to do is isolate the value from the first line and divide it by the values of the other lines (it's a speedup calculation). I thought of maybe storing the first
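
A sketch of that idea: store the first value, then divide it by every later value as the lines stream through:

    # speedup = first line's value divided by each subsequent value
    awk 'NR==1{base=$1; next} {print base/$1}' file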

How to move some lines to the top of the file?

I have a file like this: aaa.bbb.1.ccc xxx.bbb.21 mmm.ppp xxx.eee mmm.qqqq xxx.hhh.12.ddd I want to move all the lines starting with xxx. to the top of the file with a simple command (using sed, awk, grep...). So my new file will look like this: xxx.
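
One simple sketch is to read the file twice, matching lines first and the rest second:

    # lines starting with xxx. first, everything else afterwards
    { grep '^xxx\.' file; grep -v '^xxx\.' file; } > newfile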

Replace a string in multiple columns in a CSV

Consider the sample below: BRANCH|TXN_ID|CUSTOMER|PROCESS_DATE|VALUE_DATE|AMOUNT|UPLOAD_DATE|NARRATIVE 1|23234|12343|20141030|20141030|2000|20141030|TEST 1|23234|12343|20141030|20141030|2000|20141030|TEST 1|23234|12343|20141030|20141030|2000|20141030
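
The intended replacement is not shown above, so here is a purely hypothetical example: reformatting the date in columns 4 and 5 while keeping the pipe delimiter, which is the usual FS=OFS pattern for this kind of edit:

    awk 'BEGIN{FS=OFS="|"}                      # split and rejoin on |
         NR>1{for(i=4;i<=5;i++) gsub(/20141030/, "2014-10-30", $i)}
         {print}' input.csv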

Match a URL pattern in a file using sed, awk, or grep

I am trying to use grep to extract a list of URLs beginning with http and ending with jpg. grep -o 'picturesite.com/wp-content/uploads/.......' filename The code above is how far I've gotten. I then need to pass these file names to curl title : "Fami
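
A sketch of a tighter pattern, assuming the URLs sit inside quoted attributes: match from http up to .jpg without crossing a quote, then hand the list to curl:

    grep -o 'http[^"]*\.jpg' filename
    grep -o 'http[^"]*\.jpg' filename | xargs -n1 curl -O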

awk: swapping the first two fields gives a weird bug

I'm playing with awk and I'm trying to swap the first two fields of a file, like so: awk -F : '/cjares/{temp=$1; $1=$2; $2=temp; print}' /etc/passwd However, the output is not right. These are the two outputs, one without swapping the fields, the sec
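
The likely cause is the output field separator: once a field is assigned, awk rebuilds the record using OFS, which defaults to a space, so the colons vanish. A sketch of the fix:

    # set OFS to ':' so the rebuilt record keeps the passwd format
    awk 'BEGIN{FS=OFS=":"} /cjares/{temp=$1; $1=$2; $2=temp; print}' /etc/passwd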

How to get the number of unique characters per line with awk?

I have a text file that looks like this: A A A G A A A A A A A A G A G A G G A G G G G G G A A A A A T C T C C C A A A G A A C C C C C C T G G G G G T T T T T T I want to count the number of occurrences of each letter by row. There is a fair bit of d
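
A sketch that tallies each letter per row (the per-line array is reset with split, which works in any POSIX awk):

    awk '{split("", n)                          # clear the counters for this row
          for(i=1;i<=NF;i++) n[$i]++            # count each letter
          out=""
          for(c in n) out = out c "=" n[c] " "
          print out}' file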

Search for and copy a string in HTML code

I'm trying something new; I would normally do this in C# or VB, but for speed reasons I'd like to do this on my server. Open the file terms.txt. Take each item one at a time from terms.txt and open a URL (possibly curl or something else) and go to http://s
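
A rough sketch of that loop, with a placeholder URL and marker string since the real site and match text are not shown:

    # read each term, fetch the hypothetical search page, and report hits
    while IFS= read -r term; do
        if curl -sG --data-urlencode "q=${term}" 'http://searchsite.example/search' | grep -q 'SOME_MARKER'; then
            echo "$term: found"
        fi
    done < terms.txt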