I'm parsing OLX to collect phone numbers from listing pages, but I keep getting banned — how can I fix it?

I'm parsing OLX and collecting phone numbers from listing pages, but I get banned: instead of the actual page, this message appears 5df12dc91d449759575844.png. I tried using uBlock; at first it worked fine and the phone numbers came through, but then it started blocking the script that reveals the number 5df12e96b3d93987907635.png, after which I get what you see in the first picture.
The question is: does uBlock at some point stop understanding what it needs to block, and can I tell it in advance (before the windows open) what to block?
And should I use proxies in conjunction with uBlock? I tried using them without it, but it didn't help at all (IPv4 proxies).
Code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time

f = open('text-for-OLX.txt', 'a', encoding='utf8')
urls = open("input.txt", "r")
for url in urls:
    url = url.strip()

    def get_url(driver):
        driver.get(url)
        print("GOT URL")
        time.sleep(3)

    def press_cookie_btn(driver):
        cookie_btn = driver.find_element_by_xpath(
            "//div[@class='topinfo rel']"
            "/button[@class='cookie-close abs cookiesBarClose']")
        cookie_btn.click()
        print("COOKIE")
        time.sleep(2)

    def get_content(driver):
        try:
            time.sleep(1)
            # Click the spoiler that reveals the phone number
            driver.find_element_by_xpath(
                "//span[@class='link spoiler small nowrap']/span").click()
            time.sleep(2)
            try:
                phone = driver.find_element_by_xpath(
                    "//strong[@class='fnormal xx-large']").text
                print(phone)
                f.write(phone + '\n')
                time.sleep(1)
            except Exception:
                # Listings with two numbers keep them in separate spans
                phone_1 = driver.find_element_by_xpath(
                    "//strong[@class='fnormal xx-large']/span[@class='block'][1]").text
                phone_2 = driver.find_element_by_xpath(
                    "//strong[@class='fnormal xx-large']/span[@class='block'][2]").text
                print(phone_1, phone_2)
                f.write(phone_1 + ' ' + phone_2 + '\n')
                time.sleep(1)
        except Exception:
            pass

    def page_pagination(driver):
        ars = driver.find_elements_by_xpath(
            "//a[@class='marginright5 linkWithHash link detailsLink']")
        urls_1 = [ar.get_attribute("href") for ar in ars]
        for url_2 in urls_1:
            driver.get(url_2)
            time.sleep(3)
            get_content(driver)
            time.sleep(3)

    def pages_pagination(driver, last_elem):
        page_pagination(driver)
        for i in range(2, int(last_elem) + 1):
            driver.get(url + "/?page=" + str(i))
            page_pagination(driver)

    def main():
        options = Options()  # was options(), which is a NameError
        options.add_argument('user-agent=Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.12) Gecko/20050915 Firefox/1.0.7')
        options.add_extension("D:\\UB\\cjpalhdlnbpafiamejdnhcphjbkeiagm.crx")
        driver = webdriver.Chrome(options=options)
        driver.implicitly_wait(10)
        get_url(driver)
        last_elem = None
        try:
            # The original XPath had broken quoting; take the text of the
            # last pagination item so int() works in pages_pagination()
            last_elem = driver.find_element_by_xpath(
                "//span[@class='fleft item'][last()]").text
        except Exception:
            pass
        press_cookie_btn(driver)
        if last_elem:
            pages_pagination(driver, last_elem)
        else:
            page_pagination(driver)
        driver.quit()

    main()

urls.close()
f.close()
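Since the question asks about proxies: one common approach is to rotate through a pool of IPv4 proxies, launching a fresh driver with a different `--proxy-server` switch each time. A minimal sketch — the proxy addresses below are placeholders, and whether rotation actually avoids the ban depends on how OLX detects bots:

```python
from itertools import cycle

# Placeholder proxy pool; replace with your own working IPv4 proxies.
PROXIES = [
    "203.0.113.10:3128",
    "203.0.113.11:3128",
    "203.0.113.12:3128",
]

_proxy_pool = cycle(PROXIES)

def next_proxy_argument():
    """Return the Chrome command-line switch for the next proxy in the pool."""
    return "--proxy-server=http://" + next(_proxy_pool)
```

With the code above, this would be used as `options.add_argument(next_proxy_argument())` before creating each `webdriver.Chrome`.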
April 3rd 20 at 17:33
2 answers
April 3rd 20 at 17:35
I'm parsing OLX to collect phone numbers from listing pages, but I keep getting banned — how can I fix it?
Stop parsing without understanding the process.
What do you mean, "without understanding the process"?
No offense, but I can't draw any such conclusion from the phrase you quoted, at least not by myself.
Thanks in advance for the help and criticism) - nadia.Johns34 commented on April 3rd 20 at 17:38
@nadia.Johns34, I'll explain this once.
If the site behaves differently during normal use than it does when accessed through your parser, then your parser does not look like a normal user with a browser.
To avoid problems, your parser has to be indistinguishable from a standard browser driven by a standard user.
To achieve that, you need to inspect the site's network requests and responses, and (if necessary!) how its JS scripts work. - Candelario_Koss commented on April 3rd 20 at 17:41
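One concrete step toward being indistinguishable from a normal user: the fixed `time.sleep(3)` pauses in the parser are themselves a bot signature, since real users never pace page loads with clockwork regularity. A sketch of jittered delays — the interval bounds here are arbitrary guesses, not values known to defeat OLX's detection:

```python
import random
import time

def human_pause(base=3.0, jitter=2.0):
    """Sleep for a randomized interval of [base, base + jitter] seconds,
    so request timing does not form a perfectly regular pattern.
    Returns the delay actually used."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay
```

This could be dropped in wherever the parser calls `time.sleep(3)`, e.g. `human_pause()` for roughly 3 to 5 seconds.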
April 3rd 20 at 17:37
A proxy with Selenium is, shall we say, not a very original solution; the only thing worse is running Selenium through a proxy under your own account ))

But Google has the answers )) — though of course not for the naive head-on approach.
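As for what "not head-on" might mean in practice: before reaching for Selenium at all, it is worth checking (as the first answer suggests) what the site's own network requests look like and replaying them with realistic browser headers. A sketch using only the standard library — the URL is a placeholder and the header values mimic a desktop Chrome of that era; they are illustrative, not a guaranteed bypass:

```python
from urllib.request import Request

# Headers resembling what a real desktop browser sends, so the
# request is harder to distinguish from normal traffic.
BROWSER_HEADERS = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/80.0.3987.132 Safari/537.36"),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

def build_request(url):
    """Build a urllib Request carrying browser-like headers."""
    return Request(url, headers=BROWSER_HEADERS)
```

The resulting request would then be fetched with `urllib.request.urlopen(build_request(url))`; whether a plain HTTP client is enough depends on how much of the phone-reveal logic runs in JavaScript.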
