2021-10-07 Python link-fetching test (learning notes)

This snippet implements a Python crawler that scrapes software (standards) listings from a given website. It walks through multiple listing pages, extracts each entry's name and download link, and writes the results to a file. The main techniques are requests for fetching page content, lxml for parsing the HTML, and a regular expression for deriving the download link.
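Before the full script, the core fetch-and-parse pattern is worth seeing in isolation. The sketch below uses the same listing URL and XPath expressions as the script that follows, and assumes the page structure on bzmfxz.com is unchanged:

import requests
from lxml import etree

url = "http://www.bzmfxz.com/biaozhun/Soft/YDTXBZ/List_1.html"
resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
resp.encoding = 'utf-8'
html = etree.HTML(resp.text)
# Each listing entry is a <div> containing one <a>: text() gives the
# title, @href gives a relative link to the detail page
for node in html.xpath('//*[@id="main_right_box"]/div[2]/div[3]/div/div'):
    print(node.xpath("./a/text()"), node.xpath("./a/@href"))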


# @Time: 2021-9-27 15:21
# coding:utf-8

import re
import time

import requests
from lxml import etree

domain = "http://www.bzmfxz.com"

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'}

f = "lucky.txt"
with open(f, "a", encoding="utf-8") as file:
    page = 68
    while page > 0:
        # Listing pages run from List_68.html down to List_1.html
        url = domain + '/biaozhun/Soft/YDTXBZ/List_' + str(page) + '.html'
        print(url)
        resp = requests.get(url, headers=headers)
        resp.encoding = 'utf-8'  # set the character set before reading .text
        text = resp.text
        resp.close()

        html = etree.HTML(text)
        # Each listing entry is a <div> holding one <a>: its text is the
        # software name, its href a relative link to the detail page
        lss = html.xpath('//*[@id="main_right_box"]/div[2]/div[3]/div/div')

        for liis in lss:
            title = liis.xpath("./a/text()")
            dec = liis.xpath("./a/@href")
            title = [item.replace("\r\n", "") for item in title]
            # Turn the relative detail-page path into an absolute URL
            decref = [item.replace("/biaozhun", domain + "/biaozhun", 1) for item in dec]

            # Detail page:   http://www.bzmfxz.com/biaozhun/Soft/YDTXBZ/2008/01/31/26420.html
            # Download page: http://www.bzmfxz.com/Common/ShowDownloadUrl.aspx?urlid=0&id=26420
            if title and decref:
                newTitle = title[0]
                newDecUrl = decref[0]
                # The path segment in front of ".html" is the numeric record
                # id that the download page expects
                test = re.search(r'([^/]+)\.h', newDecUrl)
                if test is None:
                    continue
                newDecDownUrl = domain + "/Common/ShowDownloadUrl.aspx?urlid=0&id=" + test.group(1)

                response = requests.get(newDecDownUrl, headers=headers)
                links = etree.HTML(response.content).xpath('//*[@id="content"]/table/tr/td/a/@href')
                if links:
                    file.write(links[0] + " " + "\n")

                file.write(newTitle + " " + "\n")
                file.write(newDecUrl + " " + "\n")
                file.write(newDecDownUrl + " " + "\n")
                file.write("——————————————————" + "\n")

        page -= 1
        time.sleep(2)  # be polite to the server between listing pages
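The only non-obvious step above is turning a detail-page URL into a download URL: the regex captures the path segment in front of ".html" (the numeric record id) and splices it into the ShowDownloadUrl.aspx query string. A self-contained demonstration, using the example URLs from the comments in the script:

import re

detail = "http://www.bzmfxz.com/biaozhun/Soft/YDTXBZ/2008/01/31/26420.html"
m = re.search(r'([^/]+)\.h', detail)
if m:
    record_id = m.group(1)  # "26420"
    print("http://www.bzmfxz.com/Common/ShowDownloadUrl.aspx?urlid=0&id=" + record_id)
    # prints: http://www.bzmfxz.com/Common/ShowDownloadUrl.aspx?urlid=0&id=26420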


