
linux – Optimizing an awk command for a large file

I have these functions to process a 2GB text file. I split it into 6 parts for simultaneous processing, but it still takes 4 hours.

What else can I try to make the script faster?

Some details:

> I feed the input csv into a while loop to be read line by line.
> I grab the values from the csv line from 4 fields in the read2col function.
> The awk in my mainf function takes the values from read2col and does some arithmetic. I round the result to two decimal places. Then the line is printed to a text file.

Sample data:

"111","2018-08-24","01:21","ZZ","AAA","BBB","0","","","ZZ","ZZ111","ZZ110","2018-10-12","07:00","2018-10-12","08:05","2018-10-19","06:30","2018-10-19","09:35","ZZZZ","ZZZZ","A","B","146.00","222.26","76.26","EEE","abc","100.50","45.50","0","E","ESSENTIAL","ESSENTIAL","4","4","7","125","125"

Script:

read2col()
{
is_one_way=$(echo "$line" | awk -F'","' '{print $7}')
price_outbound=$(echo "$line" | awk -F'","' '{print $30}')
price_exc=$(echo "$line" | awk -F'","' '{print $25}')
tax=$(echo "$line" | awk -F'","' '{print $27}')
price_inc=$(echo "$line" | awk -F'","' '{print $26}')
}


#################################################
#for each line in the csv
mainf()
{
cd "$infarepath"

while read -r line; do
        #read the value of csv fields into variables
        read2col

        if [[ $is_one_way == 0 ]]; then
                if [[ $price_outbound > 0 ]]; then
                        #calculate price inc and print the entire line to txt file
                        echo "$line" | awk -v CONVFMT='%.2f' -v pout="$price_outbound" -v tax="$tax" -F'","' 'BEGIN {OFS = FS} {$25=pout;$26=(pout+(tax / 2)); print}' >>"$csvsplitfile".tmp
                else
                        #divide price exc and inc by 2 if price outbound is not greater than 0
                        echo "$line" | awk -v CONVFMT='%.2f' -v pexc="$price_exc" -v pinc="$price_inc" -F'","' 'BEGIN {OFS = FS} {$25=(pexc / 2);$26=(pinc /2); print}' >>"$csvsplitfile".tmp
                fi
        else
                echo "$line" >>"$csvsplitfile".tmp
        fi

done < "$csvsplitfile"
}

Solution:

The first thing you should do is stop invoking six sub-shells to run awk for each and every line of input. Let's do some quick, back-of-the-envelope calculations.

Assuming your input lines are about 292 characters each (as per your sample), a 2G file will consist of a little over 7.3 million lines. That means you are starting and stopping a whopping forty-four million processes.
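The numbers above can be reproduced with a bit of shell arithmetic (a sketch; read2col runs five awk sub-shells per line, and mainf adds a sixth):

```shell
# Back-of-the-envelope: a 2 GiB file of ~292-byte lines
lines=$(( 2 * 1024 * 1024 * 1024 / 292 ))
echo "$lines"            # a little over 7.3 million lines

# Six sub-shells per line: five awk calls in read2col, one in mainf
echo $(( lines * 6 ))    # roughly 44 million processes
```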

And, while Linux admirably handles fork and exec as efficiently as it can, it's not without cost:

pax$ time for i in {1..44000000} ; do true ; done
real    1m0.946s

In addition, bash hasn't really been optimised for this sort of processing; its design leads to sub-optimal behaviour for this specific use case. See this excellent answer over on our sister site for details.
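As an aside: even without leaving bash, the five per-line awk sub-shells in read2col could be collapsed into a single read that splits on commas. A minimal sketch, which assumes no field contains an embedded comma (true for the sample data); this is not the fix proposed below, just an illustration of the sub-shell overhead being avoidable:

```shell
line='"111","2018-08-24","01:21","ZZ","AAA","BBB","0","",""'
# one read builtin, no sub-shells: split the whole line on commas into an array
IFS=',' read -r -a f <<< "$line"
# fields keep their surrounding quotes; strip them where needed (field 7 is index 6)
is_one_way=${f[6]//\"/}
echo "$is_one_way"       # prints: 0
```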

The following shows an analysis of two file-processing methods: one where a program reads the entire file (each line of which is just hello), and one where bash reads it a line at a time. The two commands used to get the timings were:

time ( cat somefile >/dev/null )
time ( while read -r x ; do echo $x >/dev/null ; done <somefile )
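The somefile inputs for those timings can be generated with yes; a sketch for the one-million-line case:

```shell
# build a 1,000,000-line test file where every line is just "hello"
yes hello | head -n 1000000 > somefile
wc -l somefile
```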

For varying file sizes (user+sys time, averaged over a few runs), the results are quite interesting:

# of lines   cat-method   while-method
----------   ----------   ------------
     1,000       0.375s         0.031s
    10,000       0.391s         0.234s
   100,000       0.406s         1.994s
 1,000,000       0.391s        19.844s
10,000,000       0.375s       205.583s
44,000,000       0.453s       889.402s

From this, it appears that the while method can hold its own for smaller data sets, but it really does not scale well.

Since awk itself has ways to do calculations and format output, processing the file with one single awk script, rather than your bash/multi-awk-per-line combination, will make the cost of creating all those processes, and the line-based delays, go away.

This script would be a good first attempt; let's call it prog.awk:

BEGIN {
    FMT = "%.2f"
    OFS = FS
}
{
    isOneWay=$7
    priceOutbound=$30
    priceExc=$25
    tax=$27
    priceInc=$26

    if (isOneWay == 0) {
        if (priceOutbound > 0) {
            $25 = sprintf(FMT, priceOutbound)
            $26 = sprintf(FMT, priceOutbound + tax / 2)
        } else {
            $25 = sprintf(FMT, priceExc / 2)
            $26 = sprintf(FMT, priceInc / 2)
        }
    }
    print
}

You then simply run that single awk script with:

awk -F'","' -f prog.awk data.txt

Using the test data you provided, here's the before and after, with markers for field numbers 25 and 26:

                                                                                                                                                                                      <-25->   <-26->
"111","2018-08-24","01:21","ZZ","AAA","BBB","0","","","ZZ","ZZ111","ZZ110","2018-10-12","07:00","2018-10-12","08:05","2018-10-19","06:30","2018-10-19","09:35","ZZZZ","ZZZZ","A","B","146.00","222.26","76.26","EEE","abc","100.50","45.50","0","E","ESSENTIAL","ESSENTIAL","4","4","7","125","125"
"111","2018-08-24","01:21","ZZ","AAA","BBB","0","","","ZZ","ZZ111","ZZ110","2018-10-12","07:00","2018-10-12","08:05","2018-10-19","06:30","2018-10-19","09:35","ZZZZ","ZZZZ","A","B","100.50","138.63","76.26","EEE","abc","100.50","45.50","0","E","ESSENTIAL","ESSENTIAL","4","4","7","125","125"
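The arithmetic on that sample line checks out: field 7 is 0 and field 30 (100.50) is greater than zero, so the first branch fires; field 25 becomes 100.50 and field 26 becomes 100.50 + 76.26/2. A quick check:

```shell
# field 26 on the first branch: priceOutbound + tax / 2
awk 'BEGIN { printf "%.2f %.2f\n", 100.50, 100.50 + 76.26 / 2 }'
# prints: 100.50 138.63
```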
