I have many files with three columns of the form:

file1   | file2
1 0 1   | 1 0 2
2 3 3   | 2 3 7
3 6 2   | 3 6 0
4 1 0   | 4 1 3
5 2 4   | 5 2 1

The first two columns are the same in each file. I want to calculate the sum of the 3rd column from every file to receive some
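Since the first two columns are identical across files, they can serve as a key. A minimal sketch (file names and the two-file case are assumptions; any number of files can be listed):

```shell
cd "$(mktemp -d)"
# Recreate the sample data from the question.
printf '1 0 1\n2 3 3\n3 6 2\n4 1 0\n5 2 4\n' > file1
printf '1 0 2\n2 3 7\n3 6 0\n4 1 3\n5 2 1\n' > file2

# Key on the first two columns, accumulate $3 over all files,
# then print each key with its total.
awk '{ sum[$1" "$2] += $3 } END { for (k in sum) print k, sum[k] }' file1 file2 | sort -n
```

`sort -n` only restores a stable order, since `for (k in sum)` iterates in unspecified order.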
The left part of the data (held in a variable) looks like:

echo "$lpart"
"2017-07-03 13:39:5", "-39dB", "7c:e9:d3:f1:61:55"
"2017-07-03 13:39:5", "-39dB", "7c:e9:d3:f1:61:55"
"2017-07-03 13:39:5"
I would like to get the values of the "name" fields of the following text, using sed, awk, grep or similar. { "cast": [ { "character": "", "credit_id": "52532e3119c29579400012b5", "gender": null,
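A line-oriented sketch with grep and sed (the `"name"` values in the sample data below are invented, since the excerpt is cut off before a `name` field appears):

```shell
cd "$(mktemp -d)"
# Toy JSON fragment shaped like the excerpt; names are placeholders.
cat > cast.json <<'EOF'
{ "cast": [
  { "character": "", "credit_id": "52532e3119c29579400012b5",
    "gender": null, "name": "John Doe" },
  { "character": "X", "credit_id": "52532e3119c29579400012b6", "name": "Jane Roe" }
] }
EOF

# Isolate each "name": "..." pair, then strip down to the bare value.
grep -o '"name": *"[^"]*"' cast.json | sed 's/"name": *"\(.*\)"/\1/'
```

For real JSON a proper parser such as jq (`jq -r '.cast[].name'`) is far more robust, since regex matching breaks on escaped quotes or reformatted input.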
I have a text file named file that contains the following:

Australia AU 10
New Zealand NZ 1
...

If I use the following command to extract the country names from the first column: awk '{print $1}' file I get the following: Australia New ... Only the fir
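Because country names may span several words, printing `$1` is not enough; one sketch is to drop the last two fields (code and count) and print whatever remains:

```shell
cd "$(mktemp -d)"
printf 'Australia AU 10\nNew Zealand NZ 1\n' > file

# Decrementing NF discards the trailing fields and rebuilds $0,
# leaving only the (possibly multi-word) country name.
awk '{ NF -= 2; print }' file
```

Assigning to NF to truncate a record is supported by GNU awk and mawk; very old awks may not rebuild `$0` on NF assignment.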
if ($6 = "sum") {next}; OK, that does not work: a single = is an assignment, not a comparison. I need a way to skip a line if a field (here $6) contains the string "sum". Important here is skip, not exit, so that awk continues parsing the following lines. if ($6 == "Summe"
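Beyond fixing `=` to `==`, a "contains" test needs a regex or substring match rather than an equality check. A minimal sketch:

```shell
# Skip (not exit) any line whose 6th field contains "sum";
# `next` simply moves on to the following input line.
printf 'a b c d e sum1\na b c d e ok\na b c d e xsumx\n' | \
  awk '$6 ~ /sum/ { next } { print }'
```

For a literal (non-regex) substring test, `index($6, "sum") > 0` works the same way.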
Hi, I have a Unix command that produces a list of IP addresses along with other columns of information. I want to add something to the command so that it displays the output as a set of 3 lines, then a space or ----, then the next 3 lines, and so on. How can I
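Piping the command's output through a small awk filter does this; `seq 7` below stands in for the real command:

```shell
# Print every line, and emit a separator after every 3rd one.
seq 7 | awk '{ print } NR % 3 == 0 { print "----" }'
```

Replace `"----"` with `""` to get a blank line instead.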
I have been receiving a text file where each row should be 246 columns in length. For some reason an errant CRLF is being inserted into the file after every 23,036 characters, causing all sorts of problems. The file is in Windows format, all line end
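If every row really is a fixed 246 characters, one sketch is to strip all line endings (both the real ones and the errant mid-row CRLFs) and re-wrap the stream at the record width. The toy data below uses width 4 so the example stays small; substitute 246 for the real file:

```shell
cd "$(mktemp -d)"
# Toy file: one 4-char record broken in two by an errant CRLF.
printf 'aaaa\r\nbb\r\ncc\r\n' > broken.txt

# Remove every CR and LF, then re-wrap at the fixed record width.
tr -d '\r\n' < broken.txt | fold -w 4
```

This only works because the legitimate row breaks can be reconstructed from the fixed width; if rows vary in length, the errant breaks have to be found by position instead.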
I have an input file in .csv format which contains entries of tax invoices separated by commas. For example:

Header:  TIN | NAME | INV NO | DATE | NET | TAX | OTHERS | TOTAL
Record1: 29001234768 | A S Spares | AB012 | 23/07/2016 | 5600 | 200 | 10 | 5810
Hi, I want to remove the last comma from a line. For example: Input: This,is,a,test Desired output: This,is,a test I am able to remove the last comma if it is also the last character of the string using the command below (however this is not what I want): echo "This,is,
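A greedy sed backreference handles this: `.*` grabs everything up to the final comma, so only that comma is replaced:

```shell
# Replace the LAST comma on the line with a space.
echo 'This,is,a,test' | sed 's/\(.*\),/\1 /'
```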
The opposite of this question, Comment out line, only if previous line contains matching string ... I'd like to use sed or awk to comment out the line containing if, but ONLY if the following line contains specific. In this example: ... if [ $V1 -gt 10
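Since awk cannot look ahead, one sketch is to buffer each line and decide about the *previous* line once the current one is known (the sample script below is an assumption matching the truncated example):

```shell
cd "$(mktemp -d)"
cat > script.sh <<'EOF'
if [ $V1 -gt 10 ]; then
  specific_action
fi
if [ $V2 -gt 10 ]; then
  other_action
fi
EOF

# Hold the previous line in `prev`; before printing it, check whether
# the CURRENT line contains "specific" and `prev` contains "if".
awk 'NR > 1 { print (/specific/ && prev ~ /if/ ? "#" : "") prev }
     { prev = $0 }
     END { print prev }' script.sh
```

Only the first `if` is commented out, because only its following line mentions `specific`.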
How can I write the following in an efficient way: for j in {1..339}; do for i in {1..427}; do echo -e $j'\t'$i >> ofile.txt; done; done Here is one alternative without explicit shell loops: join -j9 -t$'\t' <(seq 339) <(seq 427) | cut -f2- > ofile.txt
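The `join -j9` trick works because both inputs are "joined" on a nonexistent (hence empty) field, yielding the full cross product. Another loop-free-in-the-shell variant is to let awk generate the pairs itself, which avoids spawning an `echo` per line; the small bounds 3 and 4 below stand in for 339 and 427:

```shell
# Cross product of two ranges, tab-separated, generated inside awk.
awk 'BEGIN { OFS = "\t"
             for (j = 1; j <= 3; j++)
               for (i = 1; i <= 4; i++)
                 print j, i }'
```

The loops still exist, but they run inside one awk process rather than forking per iteration, which is what makes the shell version slow.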
I have this huge table with ~200k lines and columns (tab-separated). I'd like to pick lines according to the value of a particular column, $4, so that their values are spaced by at least 100, but also considering the value in column $3. i.e. id tag xxx po
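One reading of the requirement is: within each group defined by $3, keep a line only if its $4 is at least 100 away from the last line kept in that group. A sketch under that assumption (the toy column layout below is invented, since the excerpt truncates the real header):

```shell
cd "$(mktemp -d)"
# Toy rows: $3 is a grouping tag, $4 a position.
printf 'a x chr1 100\nb x chr1 150\nc x chr1 210\nd x chr2 120\n' > table.txt

# Track the last kept $4 per $3 group; keep a row if its group is new
# or its $4 has moved at least 100 past the last kept value.
awk '!($3 in last) || $4 - last[$3] >= 100 { print; last[$3] = $4 }' table.txt
```

This is a single streaming pass, so 200k lines are no problem; it assumes the file is sorted by $4 within each group.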
I am web-scraping a cable modem's HTML diagnostics page source code using a shell script, and I need to fix some coding errors made by Motorola. There are a few occurrences in many pages that are missing the closing > at the end of an input tag
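Assuming the broken tags always end their line without the `>`, a sed address can pick out exactly those lines and append the missing character:

```shell
cd "$(mktemp -d)"
# Toy page: first input tag is missing its closing >.
cat > page.html <<'EOF'
<input type="text" name="a"
<input type="text" name="b">
EOF

# Lines that start with <input and contain no > anywhere get one appended;
# already-correct tags are left alone.
sed '/^<input[^>]*$/ s/$/>/' page.html
```

If the tags can be indented or share a line with other markup, the address needs loosening, e.g. anchoring on the tag rather than on `^`.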
somethingsame,somethingsame_usage,2015-11-30 01:00:00,0
somethingsame,somethingsame_usage,2015-11-30 02:00:00,0
somethingsame,somethingsame_usage,2015-11-30 03:00:00,0
somethingelse,somethingelse_usage,2015-11-30 01:00:00,0
somethingelse,somethingels
I want to rearrange the columns of a txt file, but there are empty values, which cause a problem. For example, testfile:

Name ID Count Date Other
A    1  10    513  x
     6  15    312  x
     3  18    314  x
B    19 31    942  x
     8  29    722  x

When I tried $ more testfile | awk '{print $2
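Whitespace splitting shifts the fields whenever the Name cell is blank, so the columns have to be cut by character position instead. A portable sketch with `substr` (the column widths below are assumptions matching the toy file):

```shell
cd "$(mktemp -d)"
printf 'Name ID Count\nA    1  10\n     6  15\n' > testfile

# Slice fixed character ranges, trim trailing blanks, then reorder.
awk '{ name = substr($0, 1, 5); id = substr($0, 6, 3)
       gsub(/ +$/, "", name); gsub(/ +$/, "", id)
       print id, name }' testfile
```

GNU awk can do the slicing declaratively with `FIELDWIDTHS = "5 3 5"` instead of `substr`.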
I want to convert a string (e.g. abcdef) to a column. This is what I want:

a
b
c
d
e
f

I know how to convert a string to a column using sed: $ echo abcdef | sed 's/[^.]/&\n/g'|sed '$d' But how to convert it using awk? You can set the field separator to an empty string
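With an empty field separator, every character becomes its own field (GNU awk supports this; POSIX leaves `FS=""` unspecified, so it may not work in every awk):

```shell
# Each character is a field; print them one per line.
echo abcdef | awk -F '' '{ for (i = 1; i <= NF; i++) print $i }'
```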
I get output from Junos switches in this format:

Physical interface: ge-0/0/7, Enabled, Physical link is Up
Queue counters:  Queued packets  Transmitted packets  Dropped packets
0 N4M-Q1         0               42210774942          1163342

I need only the interface name and dropped p
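A sketch: remember the interface name from each header line, then pair it with the last column of each counter row (the leading-digit test for counter rows is an assumption about the full output):

```shell
cd "$(mktemp -d)"
cat > junos.txt <<'EOF'
Physical interface: ge-0/0/7, Enabled, Physical link is Up
Queue counters:  Queued packets  Transmitted packets  Dropped packets
0 N4M-Q1         0               42210774942          1163342
EOF

# $3 of the header is the interface name with a trailing comma;
# $NF of a counter row is the dropped-packets count.
awk '/^Physical interface:/ { iface = $3; sub(/,$/, "", iface) }
     /^[0-9]/ { print iface, $NF }' junos.txt
```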
I've got a file that looks like this:

88.3055
45.1482
37.7202
37.4035
53.777

What I have to do is isolate the value on the first line and divide it by the values on the other lines (it's a speedup calculation). I thought of maybe storing the first
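Storing the first value is exactly the right idea, and awk does it in one pass:

```shell
cd "$(mktemp -d)"
printf '88.3055\n45.1482\n37.7202\n37.4035\n53.777\n' > times.txt

# Remember the first value as the baseline, then divide it
# by every later value (speedup = baseline / time).
awk 'NR == 1 { base = $1; next } { printf "%.4f\n", base / $1 }' times.txt
```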
I have files with the format given below. Please note that the entries are space-separated. 16402 8 3858 3877 3098 3099 3858 -9.0743538e+01 1.5161710e+02 -5.4964638e+00 3244 -9.7903877e+01 1.8551400e-13 1.0194137e+01 3877 -9.2467590e+01 1.5160857e+02
I have a file like this:

aaa.bbb.1.ccc
xxx.bbb.21
mmm.ppp
xxx.eee
mmm.qqqq
xxx.hhh.12.ddd

I want to move all the lines starting with xxx. to the top of the file with a simple command (using sed, awk, grep...). So my new file will look like this: xxx.
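Two grep passes do this while preserving each group's relative order:

```shell
cd "$(mktemp -d)"
printf 'aaa.bbb.1.ccc\nxxx.bbb.21\nmmm.ppp\nxxx.eee\nmmm.qqqq\nxxx.hhh.12.ddd\n' > file

# First the xxx. lines, then everything else.
grep '^xxx\.' file; grep -v '^xxx\.' file
```

To write the result out, group the two commands: `{ grep '^xxx\.' file; grep -v '^xxx\.' file; } > newfile`. The dot is escaped so `xxxz` lines don't slip through.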
Consider the sample below:

BRANCH|TXN_ID|CUSTOMER|PROCESS_DATE|VALUE_DATE|AMOUNT|UPLOAD_DATE|NARRATIVE
1|23234|12343|20141030|20141030|2000|20141030|TEST
1|23234|12343|20141030|20141030|2000|20141030|TEST
1|23234|12343|20141030|20141030|2000|20141030
I have a fasta file (imagine a txt file in which even lines are sequences of characters and odd lines are sequence IDs). I would like to search for a string in the sequences and get the position of matching substrings as well as their IDs. Example: In
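A sketch that tracks the most recent header ID and scans each sequence line with repeated `index()` calls (the search string and the toy sequences below are invented, since the excerpt cuts off before the example):

```shell
cd "$(mktemp -d)"
cat > seqs.fasta <<'EOF'
>seq1
ACGTACGT
>seq2
TTTACGGG
EOF

# Remember the ID from each > header; for every sequence line, print
# the ID and the 1-based position of each occurrence of the pattern.
awk -v pat=ACG '
  /^>/ { id = substr($0, 2); next }
  { s = $0; offset = 0
    while ((p = index(s, pat)) > 0) {
      print id, offset + p
      offset += p
      s = substr(s, p + 1)
    }
  }' seqs.fasta
```

Advancing by one character (`p + 1`) also finds overlapping matches.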
Hi, I am trying to convert a text file to HTML with a table, so that I can mail the output in table format, and I used awk 'BEGIN{print "Content-Type: text/html; charset="us-ascii""\n "<html>"\n "<Body>"\n
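The quoting in that attempt is broken (the inner quotes end the awk string early). A minimal working sketch, with headers and styling left out and a toy input file standing in for the real report:

```shell
cd "$(mktemp -d)"
printf 'alice 42\nbob 7\n' > report.txt

# Wrap each whitespace-separated field in a <td> cell, one <tr> per line.
awk 'BEGIN { print "<html><body><table>" }
     { printf "<tr>"
       for (i = 1; i <= NF; i++) printf "<td>%s</td>", $i
       print "</tr>" }
     END { print "</table></body></html>" }' report.txt
```

For mailing, prepend the `Content-Type: text/html` header line in the `BEGIN` block before the `<html>` tag.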
I am trying to use grep to extract a list of URLs beginning with http and ending with jpg. grep -o 'picturesite.com/wp-content/uploads/.......' filename The code above is how far I've gotten. I then need to pass these file names to curl title : "Fami
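A sketch of the extraction: anchor on `http`, allow any run of characters that cannot end a URL inside this file (quotes or spaces), and require a `.jpg` suffix. The sample lines below are invented to resemble the truncated `title : "Fami...` content:

```shell
cd "$(mktemp -d)"
cat > filename <<'EOF'
title : "Family", "img": "http://picturesite.com/wp-content/uploads/a.jpg", x
more "http://picturesite.com/wp-content/uploads/b.jpg" text
EOF

# -o prints only the matched URL, one per line.
grep -o 'http[^" ]*\.jpg' filename
```

The list can then be fed to curl with `grep -o 'http[^" ]*\.jpg' filename | xargs -n1 curl -O`.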
I want to write an efficient awk script that will take a file similar to the excerpt shown below and print a certain line (for instance, the line beginning with "Time (UTC):") from each matching record. I believe there's a better way than what I
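A sketch under two assumptions, since the excerpt omits the real record format: records are blank-line separated, and "matching" means the record contains some marker (here `ERROR`). Paragraph mode (`RS=""`) then makes each record one awk record with lines as fields:

```shell
cd "$(mktemp -d)"
cat > log.txt <<'EOF'
Event: 1
Time (UTC): 10:00
Status: OK

Event: 2
Time (UTC): 11:30
Status: ERROR
EOF

# RS="" reads blank-line-separated records; FS="\n" makes each line a
# field. For records matching the marker, print the wanted line.
awk -v RS= -F '\n' '/ERROR/ {
  for (i = 1; i <= NF; i++)
    if ($i ~ /^Time \(UTC\):/) print $i }' log.txt
```

This scans the file once, which is usually the "better way" over per-record grep passes.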
I have a file which contains lines of data in the following format:

a11 a12 a13 a14 a15
a21 a22 a23 a24 a25
a31 a32 a33 a34 a35
a41 a42 a43 a44 a45
...

What I need is to save this data in a new file with the following format after performing some a
All my html files reside here: /home/thinkcode/myfiles/html/ I want to move the newest 10 files to /home/thinkcode/Test I have this so far. Please correct me. I am looking for a one-liner! ls -lt *.htm | head -10 | awk '{print "cp "$1" "
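One fix worth noting: with `ls -lt`, `$1` is the permission string, not the file name. Using `ls -t` (bare names, newest first) avoids parsing the long listing entirely. A sketch, with toy files standing in for the real directories:

```shell
cd "$(mktemp -d)"
mkdir html Test
for f in a b c; do touch "html/$f.htm"; done

# Newest first, take the first 10 names, move each into Test/.
(cd html && ls -t *.htm | head -10 | xargs -I{} mv {} ../Test/)
ls Test
```

The usual caveat applies: parsing `ls` breaks on file names containing newlines; for such names, a `zsh` glob qualifier or `find ... -printf` with timestamps is safer.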
I'm playing with awk, trying to swap the first two fields of a file, like so: awk -F : '/cjares/{temp=$1; $1=$2; $2=temp; print}' /etc/passwd However, the output is not right. These are the two outputs, one without swapping the fields, the sec
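The catch is that `-F` sets only the *input* separator; once a field is modified, awk rebuilds `$0` using the *output* separator, which defaults to a space. Setting `OFS` fixes the output:

```shell
# Set both FS and OFS to ":" so the rebuilt line keeps its colons.
echo 'root:x:0:0' | \
  awk 'BEGIN { FS = OFS = ":" } { temp = $1; $1 = $2; $2 = temp; print }'
```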
I have a text file that looks like this:

A A A G A A A A A A
A A G A G A G G A G
G G G G G A A A A A
T C T C C C A A A G
A A C C C C C C T G
G G G G T T T T T T

I want to count the number of occurrences of each letter by row. There is a fair bit of d
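A per-row tally with explicit counters (assuming, as in the sample, that the only letters are A, C, G, T; for arbitrary letters an associative array works the same way):

```shell
cd "$(mktemp -d)"
printf 'A A A G A A A A A A\nT C T C C C A A A G\n' > letters.txt

# Reset the counters on each row, count the fields, print the tallies.
awk '{ a = c = g = t = 0
       for (i = 1; i <= NF; i++) {
         if ($i == "A") a++; else if ($i == "C") c++
         else if ($i == "G") g++; else if ($i == "T") t++
       }
       print "A=" a, "C=" c, "G=" g, "T=" t }' letters.txt
```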
I'm trying something new. I would normally do this in C# or VB, but for speed reasons I'd like to do it on my server. Open the file terms.txt, take each item one at a time, and open a URL (possibly with curl or something else), going to http://s
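A `while read` loop is the usual shape for this. The sketch below only *prints* the curl command it would run (and `example.com` stands in for the truncated real site), so it stays side-effect free; drop the `echo` to actually fetch:

```shell
cd "$(mktemp -d)"
printf 'alpha\nbeta\n' > terms.txt

# Read terms.txt line by line and build one URL per term.
while IFS= read -r term; do
  echo "curl -s \"http://example.com/search?q=$term\""
done < terms.txt
```

If terms can contain spaces or other URL-special characters, let curl encode them: `curl -G --data-urlencode "q=$term" http://example.com/search`.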