
Hello, I have written this script to automate the login process to the eToro server and then grab the profit and equity values from the portfolio page.

import time
from selenium import webdriver

def get_profit():
    profit = equity = ''

    try:
        options = webdriver.ChromeOptions()
        options.add_argument('--headless')            # Run Chrome in headless mode.
        options.add_argument('--no-sandbox')          # Bypass the OS security model.
        options.add_argument('--disable-automation')
        options.add_argument('--disable-extensions')

        # Create a new session
        driver = webdriver.Chrome(options=options, executable_path='/usr/bin/chromedriver')
        driver.get('https://etoro.com/portfolio')

        time.sleep(2)

        driver.find_element_by_id('username').send_keys('my_username')
        driver.find_element_by_id('password').send_keys('my_password')
        driver.find_element_by_css_selector('button.ng-binding').click()

        time.sleep(2)
        driver.save_screenshot('static/img/etoro.png')

        profit = driver.find_element_by_xpath('/html/body/ui-layout/div/div/footer/et-account-balance/div/div[5]/span[1]').text
        equity = driver.find_element_by_xpath('/html/body/ui-layout/div/div/footer/et-account-balance/div/div[7]/span[1]').text
        driver.quit()
    except Exception as e:
        profit = repr(e)

    return profit, equity

The problem is that I am constantly getting the same error message: NoSuchElementException('no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/ui-layout/div/div/footer/et-account-balance/div/div[5]/span[1]"}\n (Session info: headless chrome=86.0.4240.22)', None, ['#0 0x55a2e8090d99 ', ''])

You can see this output if you run my web app script at http://superhost.gr/portfolio. A few days ago this script was able to grab these two values, succeeding roughly once every half hour and failing the rest of the time, but now it can no longer access the website at all and I don't know why.

1 Answer

The page is still loading, so your lookups run before the elements exist. Use explicit waits to let the elements load first, and then grab them.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

profit = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.XPATH, "/html/body/ui-layout/div/div/footer/et-account-balance/div/div[5]/span[1]"))
).text
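To see why this helps: `WebDriverWait(...).until(...)` is essentially a polling loop that re-evaluates a condition until it returns something truthy or the timeout expires, rather than checking once like `find_element_by_xpath` does. A minimal pure-Python sketch of that idea (the `wait_for` helper name is illustrative, not Selenium API):

```python
import time

def wait_for(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value, or raise
    TimeoutError once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll)
```

This is why increasing the timeout can matter: on a slow page load, the element simply appears after more polls.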
Arundeep Chohan
  • Also, this is the screenshot I am taking in order to see what state the website was in when I tried to grab the values: http://superhost.gr/static/img/etoro.png ***It says "an error occurred, please try again", and that appears right on the login page.*** – Νικόλαος Βέργος Sep 05 '20 at 18:16
  • ```profit = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.CSS_SELECTOR, '.ng-binding'))) driver.find_element_by_id('username').send_keys('myuser') driver.find_element_by_id('password').send_keys('mypass') driver.find_element_by_css_selector('button.ng-binding').click() profit = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.XPATH, '/html/body/ui-layout/div/div/footer/et-account-balance/div/div[5]/span[1]'))).text``` But now I am receiving this error: ***TimeoutException('', None, None)*** – Νικόλαος Βέργος Sep 05 '20 at 18:18
  • ```profit = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'button.ng-binding')))``` That didn't help either. – Νικόλαος Βέργος Sep 05 '20 at 18:26
  • Increase the timeout from 5. – Arundeep Chohan Sep 07 '20 at 19:19
  • I tried 20 and 30 seconds but I still get "TimeoutException", and the screenshot says an error occurred. Somehow the eToro page identifies Selenium and blocks its access to the login page... I don't know how to work around that. – Νικόλαος Βέργος Sep 08 '20 at 20:20
  • The reason for this is that the HTML source code seen by Selenium is actually different from what is really there. eToro uses JavaScript rendering heavily, which makes web scraping tougher. I am researching the very same topic, and the only advice I can give for now is to look into JavaScript scraping and try to retrieve the data from AJAX web calls. – lazarea Oct 19 '20 at 12:43
  • @lazarea: Hello, I was wondering if you had any luck with JavaScript scraping. I still cannot make it work on my side. – Νικόλαος Βέργος Jul 26 '21 at 12:16
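The AJAX approach suggested above amounts to calling the site's JSON endpoints directly (found via the browser dev tools' Network tab) instead of scraping the rendered HTML, so there is no rendering or element-waiting at all. A minimal sketch of the parsing half; the response shape here is entirely hypothetical, since eToro's internal API is undocumented:

```python
import json

# Hypothetical JSON payload, shaped like what a portfolio AJAX call
# *might* return; the real field names must be read from dev tools.
sample_response = '{"equity": 1234.56, "profit": -78.90}'

def parse_portfolio(payload: str) -> tuple[float, float]:
    """Extract (profit, equity) from a hypothetical portfolio JSON payload."""
    data = json.loads(payload)
    return data["profit"], data["equity"]
```

The fetching half would typically be a `requests.get` call with the same cookies/headers the browser sends, which sidesteps headless-browser detection entirely but is fragile if the site changes or signs its API calls.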