How to extract strings from 4 different files and insert them into a single line of an HTML document

For hours I have been trying to find a way to combine the contents of 4 distinct files into a single new output file, saida.html. The 4 files I want to extract information from are:

  • link.txt
  • foto.txt
  • legenda.txt
  • nome.txt

Important! The files and their contents are on my end, that is, on my machine, so it would not be feasible to post everything here.

If anyone is willing to lend a hand, please create the files on your PC with touch or echo (see the sketch after the example lists below).

Add dummy links to the link.txt file, one per line.

Add image file names to the foto.txt file. Example:

  • 1.png
  • 2.png
  • 3.png
  • 4.png
  • 5.png
  • 6.png
  • 7.png
  • 8.png

Add captions to the legenda.txt file. Example:

  • Agent 01
  • Agent 02
  • Agent 03
  • Agent 04
  • Agent 05
  • Agent 06
  • Agent 07
  • Agent 08

And finally, add names to the nome.txt file. Example:

  • Marina
  • Truthful
  • Kiki
  • Nautila
  • Keila
  • Brenda
  • Patrick
  • Nilton
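
For convenience, here is a minimal sketch that creates the four sample files (the example.com link URLs are hypothetical placeholders, since the real ones were not posted):

#!/bin/sh
# create the four input files with the sample data above
# (the example.com URLs are made-up placeholders)
for i in $(seq 1 8); do echo "https://example.com/$i"; done > link.txt
for i in $(seq 1 8); do echo "$i.png"; done > foto.txt
for i in $(seq 1 8); do printf 'Agent %02d\n' "$i"; done > legenda.txt
printf '%s\n' Marina Truthful Kiki Nautila Keila Brenda Patrick Nilton > nome.txt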

I know the real problem is the way I’m trying to do it. See the code:

Shell Script

#!/bin/sh
#
# delete the output file, if it exists
[ -e saida.html ] && rm -f saida.html

cat link.txt | while read LINK
do 
  cat foto.txt | while read FIGURA
  do 
    cat legenda.txt  | while read LEGENDA
    do      
    cat nome.txt  | while read NOME
    do
        echo -e "<html>\n<body>\n<a href='$LINK'><img src='$LINK/$FIGURA' alt='$LEGENDA' title='$NOME'/></a>\n</body>\n</html>"
      done
    done
  done
done >> saida.html

cat saida.html

The problem is how to pair line N of each file on line N of the final file. Instead, because the loops are nested, the output file is generated with every combination duplicated over and over until the innermost while loop is exhausted: with 8 lines per file, the script emits 8 × 8 × 8 × 8 = 4096 blocks instead of 8 lines.
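
Here is a minimal sketch of why that happens, using two hypothetical 2-line files: nested read loops iterate over every combination, not over matching pairs.

# two small test files (hypothetical contents)
printf 'a\nb\n' > x.txt
printf '1\n2\n' > y.txt

while read A
do
  while read B
  do
    echo "$A-$B"
  done < y.txt
done < x.txt
# prints a-1, a-2, b-1, b-2 -- four combinations instead of the two pairs a-1 and b-2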

Does anyone know a way to fix this, or a better method of doing it?

1 answer

From what I can understand, there is a corresponding line in each text file, so you can run a for loop from 1 to n, where n is the total number of lines in each file; something like:

#!/bin/sh
#
# delete the output file, if it exists
[ -e saida.html ] && rm -f saida.html

total_linhas=$(wc -l link.txt | cut -d' ' -f1)

echo -e "<html>\n<body>" >> saida.html

for linha in $(seq 1 $total_linhas);
do
    link=$(sed "${linha}q;d" link.txt)
    figura=$(sed "${linha}q;d" foto.txt)
    legenda=$(sed "${linha}q;d" legenda.txt)
    nome=$(sed "${linha}q;d" nome.txt)

    echo -e "<a href='$link'><img src='$link/$figura' alt='$legenda' title='$nome'/></a>"
done >> saida.html

echo -e "</body>\n</html>" >> saida.html

Another point is about the html and body tags: since each only needs to be opened and closed once, write them before and after the for loop; this keeps the generated file from being malformed.
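
With the sample data above, saida.html would come out looking something like this (link URLs hypothetical):

$ cat saida.html
<html>
<body>
<a href='https://example.com/1'><img src='https://example.com/1/1.png' alt='Agent 01' title='Marina'/></a>
<a href='https://example.com/2'><img src='https://example.com/2/2.png' alt='Agent 02' title='Truthful'/></a>
...
</body>
</html>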

Reference

Retrieving a specific line from a file (sed)
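
In short, the sed "${linha}q;d" expression works like this (my reading of the idiom, shown for line 3):

# 3q -> on line 3, print it (sed's default behavior) and quit
# d  -> delete (suppress) every line that reaches this command, i.e. lines 1 and 2
sed "3q;d" link.txt   # prints only the third line of link.txt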

Note

Using head and tail (head -n $linha link.txt | tail -n 1) instead of sed also works, but at least in the tests I ran, it was slower.
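
As a sketch, that head/tail variant inside the loop would look like:

# same line extraction as the sed version, using head and tail
link=$(head -n "$linha" link.txt | tail -n 1)
figura=$(head -n "$linha" foto.txt | tail -n 1)
legenda=$(head -n "$linha" legenda.txt | tail -n 1)
nome=$(head -n "$linha" nome.txt | tail -n 1)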

  • That was exactly it. Take my upvote, followed by the accept. It cleared up my doubts and fixed my problem. Thanks!
