Awk: summing several files and showing rows that do not appear in every file


I have been using awk to sum up values across multiple files. This is used to total the summaries produced when parsing server logs, and it really does speed up the final overall count, but I have hit a minor problem and the typical examples I have found on the web have not helped.

Here is the example:

cat file1
aa 1
bb 2
cc 3
ee 4

cat file2
aa 1
bb 2
cc 3
dd 4

cat file3
aa 1
bb 2
cc 3
ff 4

And the script:

cat test.sh
#!/bin/bash

files="file1 file2 file3"

i=0;
oldname="";
for names in $(echo $files); do
        ((i++));
        if [ $i == 1 ]; then
                oldname=$names
                #echo "-- $i $names"
                shift;
        else
                oldname1=$names.$$
                awk 'NR==FNR { _[$1]=$2 } NR!=FNR { if(_[$1] != "") nn=0; nn=($2+_[$1]); print $1" "nn }' $names $oldname > $oldname1
                if [ $i -gt 2 ]; then
                        rm $oldname;
                fi
                oldname=$oldname1
        fi
done
echo "------------------------------ $i"
cat $oldname

When I run this, the keys that appear in every file are added up, but the rows that appear in only one of the files do not all make it into the result:

./test.sh
------------------------------ 3
aa 3
bb 6
cc 9
ee 4

ff and dd do not appear in the list; from what I have seen, the problem is within the NR==FNR logic.
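
From the look of it, keys that exist only in the file read first ($names) go into the array but are never printed, because the printing only happens while reading $oldname. A possible fix inside my loop (a sketch only, using the same $names/$oldname/$oldname1 variables, untested against the real logs) is to track which keys were printed and flush the leftovers in an END block:

awk 'NR==FNR { _[$1]=$2; next }
     { seen[$1]; print $1" "($2+_[$1]) }                     # keys present in $oldname (summed)
     END { for (k in _) if (!(k in seen)) print k" "_[k] }   # keys only present in $names
    ' $names $oldname > $oldname1

The END part comes out in whatever order awk walks the array, so a sort may still be needed afterwards.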

I have come across this:

http://dbaspot.com/shell/246751-awk-comparing-two-files-problem.html

If you want all the lines in file1 that are not in file2:
awk 'NR == FNR { a[$0]; next } !($0 in a)' file2 file1

If you want only the unique lines in file1 that are not in file2:
awk 'NR == FNR { a[$0]; next } !($0 in a) { print; a[$0] }' file2 file1

but this only complicates the current issue further when attempted, since lots of other fields end up duplicated.
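
For reference, against the sample file1 and file2 above that approach should just isolate the row that is missing from file2 (no summing at all), something like:

awk 'NR == FNR { a[$0]; next } !($0 in a)' file2 file1
ee 4

so I would still have to merge that output back into the totals somehow.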

Updates to the content and further tests after posting the question ...

I wanted to stick with awk since it does appear to be a much shorter way of achieving the result, but there is still a problem.

awk '{a[$1]+=$2}END{for (k in a) print k,a[k]}'  file1 file2 file3
aa 3
bb 6
cc 9
ee 4
ff 4
gg 4
RESULT_SET_4 0
RESULT_SET_3 0
RESULT_SET_2 0
RESULT_SET_1 0
$ cat file1
RESULT_SET_1
aa 1
RESULT_SET_2
bb 2
RESULT_SET_3
cc 3
RESULT_SET_4
ff 4
$ cat file2
RESULT_SET_1
aa 1
RESULT_SET_2
bb 2
RESULT_SET_3
cc 3
RESULT_SET_4
ee 4

The file content is not left as it was originally, i.e. the results are not under their headings; my original method did keep it all intact.
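
What I really want is to keep the headings and still pick up every key from every file. A rough single-pass sketch (untested against the real logs, and assuming the heading lines are the only lines with a single field) would be something like:

awk 'NF==1 {                         # heading line: remember it, keep first-seen order
         h=$1
         if (!(h in hseen)) { hseen[h]; order[++n]=h }
         next
     }
     {                               # data line: sum per heading+key, keep key order
         if (!((h,$1) in sum)) keys[h]=keys[h] $1 SUBSEP
         sum[h,$1]+=$2
     }
     END {
         for (i=1; i<=n; i++) {
             print order[i]
             m=split(keys[order[i]], k, SUBSEP)
             for (j=1; j<m; j++) print k[j], sum[order[i],k[j]]
         }
     }' file1 file2 file3

On the three sample files below that should put ff, ee and gg all under RESULT_SET_4 with their values, and aa/bb/cc summed under their own headings.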

Updated input files - the expected output should keep the headings in the correct context:

cat file1
RESULT_SET_1
aa 1
RESULT_SET_2
bb 2
RESULT_SET_3
cc 3
RESULT_SET_4
ff 4

cat file2
RESULT_SET_1
aa 1
RESULT_SET_2
bb 2
RESULT_SET_3
cc 3
RESULT_SET_4
ee 4

cat file3
RESULT_SET_1
aa 1
RESULT_SET_2
bb 2
RESULT_SET_3
cc 3
RESULT_SET_4
gg 4
The awk line in test.sh to produce the above is:

awk -v i=$i 'NR==FNR { _[$1]=$2 } NR!=FNR { if (_[$1] != "") { if  ($2 ~ /[0-9]/)   { nn=($2+_[$1]); print $1" "nn; } else { print;} }else { print; } }' $names $oldname> $oldname1

./test.sh
------------------------------ 3
RESULT_SET_1
aa 3
RESULT_SET_2
bb 6
RESULT_SET_3
cc 9
RESULT_SET_4
ff 4
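
Spelled out over several lines (the same logic as the one-liner above, just reformatted with comments):

awk -v i=$i '                             # i is passed in but not used in the awk body
    NR==FNR { _[$1]=$2 }                  # first file ($names): remember the value for each key
    NR!=FNR {
        if (_[$1] != "") {                # key had a non-empty value in $names
            if ($2 ~ /[0-9]/) { nn=($2+_[$1]); print $1" "nn }   # data line: print the sum
            else              { print }                          # no number here: pass through
        } else {
            print                         # heading, or key not valued in $names: pass through
        }
    }' $names $oldname > $oldname1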

The following works but destroys the required formatting:

  awk '($2 != "")  {a[$1]+=$2};  ($2 == "") {  a[$1]=$2 } END {for (k in a) print k,a[k]} '  file1 file2 file3
    aa 3
    bb 6
    cc 9
    ee 4
    ff 4
    gg 4
    RESULT_SET_4
    RESULT_SET_3
    RESULT_SET_2
    RESULT_SET_1


$ awk '{a[$1]+=$2}END{for (k in a) print k,a[k]}' file1 file2 file3 | sort
aa 3
bb 6
cc 9
dd 4
ee 4
ff 4

Edit:

It's a bit of a hack but it does the job:

$ awk 'FNR==NR&&!/RESULT/{a[$1]=$2;next}($1 in a){a[$1]+=$2}END{for (k in a) print k,a[k]}' file1 file2 file3 | sort | awk '$1="RESULTS_SET_"NR"\n"$1'
RESULTS_SET_1
aa 3
RESULTS_SET_2
bb 6
RESULTS_SET_3
cc 9
RESULTS_SET_4
ff 4