Python 3 crawler: noscript tag problem when fetching pages through a proxy

This post describes how to solve the noscript-tag problem a Python crawler encounters by setting a proxy and a User-Agent, so that pages from sites such as Baidu can be fetched normally.

Steps

  • Run the Python crawler, fetching the page through a proxy
from urllib.error import URLError
from urllib.request import ProxyHandler, build_opener

url = 'https://www.baidu.com'
# Route both HTTP and HTTPS requests through the local proxy on port 8888
proxy_handler = ProxyHandler({
    'http': 'http://127.0.0.1:8888',
    'https': 'https://127.0.0.1:8888'
})
opener = build_opener(proxy_handler)
try:
    response = opener.open(url)
    data = response.read().decode('utf-8')
    # print(data)
    # Save the fetched page so the returned HTML can be inspected
    with open("./baidu.html", "w", encoding="utf-8") as fhandle:
        fhandle.write(data)
except URLError as e:
    print(e.reason)

Page fetched through the proxy

<html>
<head>
        <script>
                location.replace(location.href.replace("https://","http://"));
        </script>
</head>
<body>
        <noscript><meta http-equiv="refresh" content="0;url=http://www.baidu.com/"></noscript>
</body>
</html>

Cause analysis

  • When crawling Baidu, if the server does not recognize the client as a browser, it returns a stub page that uses location.replace() to redirect (rewriting https:// to http://)
  • The noscript tag is the fallback for clients that do not run JavaScript: its meta refresh performs the same redirect; receiving only this stub means the crawler was not identified as a browser (a quick check is sketched below)
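
To confirm that the saved page is just this redirect stub, a quick string check is enough. A minimal sketch; the is_redirect_stub helper below is my own illustration, not part of any library:

def is_redirect_stub(html: str) -> bool:
    # Illustrative helper: the stub served to unrecognized clients is a tiny
    # page that only redirects, via location.replace() and a <noscript> meta refresh
    return 'location.replace' in html and '<noscript>' in html

with open("./baidu.html", encoding="utf-8") as f:
    if is_redirect_stub(f.read()):
        print("Got the redirect stub: the client was not recognized as a browser")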

Solution

  • Adding a browser identifier (User-Agent) to the HTTP request headers is enough
from urllib.error import URLError
from urllib.request import ProxyHandler, build_opener

url = 'https://www.baidu.com'
# A (name, value) tuple, as opener.addheaders expects; note this must be a
# tuple, not a set, or the two strings may be unpacked in the wrong order
headers = (
    "User-Agent",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36"
)
proxy_handler = ProxyHandler({
    'http': 'http://127.0.0.1:8888',
    'https': 'https://127.0.0.1:8888'
})
opener = build_opener(proxy_handler)
# Every request made through this opener now carries the User-Agent header
opener.addheaders = [headers]
try:
    response = opener.open(url)
    data = response.read().decode('utf-8')
    # print(data)
    with open("./baidu.html", "w", encoding="utf-8") as fhandle:
        fhandle.write(data)
except URLError as e:
    print(e.reason)
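
As an alternative to opener.addheaders, the same User-Agent can be attached to a single request with urllib.request.Request; a minimal sketch under the same local-proxy assumption:

from urllib.request import ProxyHandler, Request, build_opener

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36")
opener = build_opener(ProxyHandler({
    'http': 'http://127.0.0.1:8888',
    'https': 'https://127.0.0.1:8888'
}))
# Headers passed to Request apply only to this request,
# leaving the opener's global addheaders untouched
req = Request('https://www.baidu.com', headers={'User-Agent': ua})
response = opener.open(req)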