I need to make a program that downloads PDFs from various websites, daily and automatically. This is very easy to do with the C# WebClient class; however, on certain websites it is not possible to find the download URL at all. When the download button is clicked, the site's JavaScript generates a link on the fly. I tried making a WebRequest containing the session cookies (which I identified with Fiddler) in an attempt to download the PDF from the server response, but was unsuccessful.
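For reference, this is roughly what the cookie-carrying request attempt looks like. This is a minimal sketch only: the URL, cookie name, and cookie value are hypothetical placeholders for whatever Fiddler actually captures on the real site.

```csharp
using System;
using System.Net;

class PdfDownloader
{
    static void Main()
    {
        // Hypothetical URL; replace with the request captured in Fiddler.
        string pdfUrl = "https://example.com/diarios/download?id=123";

        var request = (HttpWebRequest)WebRequest.Create(pdfUrl);
        request.CookieContainer = new CookieContainer();
        // The session cookie must come from a prior authenticated request;
        // name and value here are placeholders.
        request.CookieContainer.Add(
            new Cookie("ASP.NET_SessionId", "value-from-fiddler", "/", "example.com"));
        request.UserAgent = "Mozilla/5.0"; // some servers reject requests without a User-Agent

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var file = System.IO.File.Create("diario.pdf"))
        {
            stream.CopyTo(file);
        }
    }
}
```

If the link is generated by JavaScript at click time, a request built this way often fails because the URL or a one-time token only exists after the script runs, which is why browser automation may be needed instead.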
Click "Search Diaries" in the left corner of the site.
Using the WatiN DLL, a web browser automation library, I can simulate the button click in the browser, but it is not possible to handle Internet Explorer's "Do you want to open or save this file?" dialog.
Is there any way to download files from sites like this?
Update: I am now using Selenium. With this automation tool it is possible to configure downloads to happen automatically, thus skipping the IE download window. Thank you very much for the reply, regards!
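For anyone hitting the same problem, configuring automatic downloads in Selenium looks roughly like this with the Chrome driver (a sketch under stated assumptions: the URL, download directory, and button id are hypothetical; the `AddUserProfilePreference` calls are the standard Selenium .NET way to set Chrome preferences).

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class Program
{
    static void Main()
    {
        var options = new ChromeOptions();
        // Save PDFs directly to disk instead of opening the built-in viewer,
        // and suppress the download prompt entirely.
        options.AddUserProfilePreference("download.default_directory", @"C:\pdfs");
        options.AddUserProfilePreference("download.prompt_for_download", false);
        options.AddUserProfilePreference("plugins.always_open_pdf_externally", true);

        using (IWebDriver driver = new ChromeDriver(options))
        {
            driver.Navigate().GoToUrl("https://example.com/diarios"); // hypothetical URL
            driver.FindElement(By.Id("btnDownload")).Click();          // hypothetical button id
        }
    }
}
```

Because the click happens inside a real browser, the site's JavaScript runs normally and generates the link, so no save/open dialog needs to be handled.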
– Fernando Medeiros