###
First, Challenges 7 and 10.

Challenge 7: replay attack with a packet-capture tool.

Steps:
1. Find the URL that returns the data.
2. Replay that URL: the server responds with 403.
3. Check the cookies: nothing changed between the two requests, so this request probably depends on another request.
4. Find the preceding request, cityjson. Request it first, then hit our data endpoint, and it works.
Python code:
```python
import requests
import urllib3

urllib3.disable_warnings()

url1 = "https://www.python-spider.com/cityjson"
url2 = "https://www.python-spider.com/api/challenge7"
cookies = ("vaptchaNetway=cn; Hm_lvt_337e99a01a907a08d00bed4a1a52e35d=1628248083,1629106799; "
           "sessionid=g1siko0evn5hmnn3pbgl0vaoqjx29cfo; Hm_lpvt_337e99a01a907a08d00bed4a1a52e35d=1629124377")
cookies_dict = {cookie.split("=")[0].strip(): cookie.split("=")[1].strip()
                for cookie in cookies.split(";")}

all_page_sum = []
for i in range(1, 101):
    print("page: ", i)
    data = {"page": i}
    # Request this URL first; only then does the data endpoint below respond.
    res = requests.get(url1, verify=False, cookies=cookies_dict)
    res2 = requests.post(url2, verify=False, cookies=cookies_dict, data=data)
    page_sum = sum([int(item["value"]) for item in res2.json()["data"]])
    print([int(item["value"]) for item in res2.json()["data"]])
    all_page_sum.append(page_sum)

print("all_page_sum =", sum(all_page_sum))
```
Challenge 10:

Replay attack with a packet-capture tool.

Steps:
1. Find the URL that returns the data.
2. Replay that URL: the data comes back directly, so the request is self-contained and we can issue it from Python.
3. Send the request from Python: it fails.
4. Compare the request Charles replays against the one the Python code sends: the cookie order differs.
5. Adjust the Python code to send cookies in the same order as the replayed request: success.
Python code:
```python
import requests
import urllib3

urllib3.disable_warnings()

url1 = "https://www.python-spider.com/api/challenge10"
cookies = ("vaptchaNetway=cn; Hm_lvt_337e99a01a907a08d00bed4a1a52e35d=1628248083,1629106799; "
           "sessionid=g1siko0evn5hmnn3pbgl0vaoqjx29cfo; Hm_lpvt_337e99a01a907a08d00bed4a1a52e35d=1629124377")
headers = {
    "Host": "www.python-spider.com",
    "Connection": "keep-alive",
    "Content-Length": "0",
    "Origin": "https://www.python-spider.com",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36",
    "Accept": "*/*",
    "Referer": "https://www.python-spider.com/challenge/10",
    "Accept-Encoding": "gzip, deflate",
    "Accept-Language": "zh-CN,zh;q=0.9",
}
cookies_dict = {cookie.split("=")[0].strip(): cookie.split("=")[1].strip()
                for cookie in cookies.split(";")}

all_page_sum = []
for i in range(1, 101):
    print("page: ", i)
    data = {"page": i}
    session = requests.session()
    # The key step: assign the dict wholesale so the headers go out in this exact order.
    session.headers = headers
    res2 = session.post(url1, verify=False, cookies=cookies_dict, data=data)
    page_sum = sum([int(item["value"]) for item in res2.json()["data"]])
    print([int(item["value"]) for item in res2.json()["data"]])
    all_page_sum.append(page_sum)

print("all_page_sum =", sum(all_page_sum))
```
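If the server also fingerprints the cookie order itself, requests' cookie jar (which manages cookies for you and may reorder them) gets in the way. A minimal sketch, with hypothetical cookie values: set the `Cookie` header verbatim, so the string is sent byte-for-byte in the captured order:

```python
import requests

# Hypothetical cookie string, copied in the exact order shown by the capture tool.
raw_cookies = "vaptchaNetway=cn; sessionid=abc123; Hm_lpvt=1629124377"

# Setting the Cookie header directly bypasses the cookie jar, so the
# order is preserved exactly as written.
req = requests.Request(
    "POST", "https://www.python-spider.com/api/challenge10",
    headers={"Cookie": raw_cookies},
).prepare()
print(req.headers["Cookie"])  # sent verbatim, order preserved
```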
####
Challenges 1 and 2.

Challenge 1: find the data endpoint. Replaying page 1 works, but replaying page 2 and beyond fails.

Compare a working request with a failing one: the working request carries a safe parameter in its headers, and it changes on every request.

Since it changes every time, it likely depends on a random number or the time. The guess: JS encrypts this safe value, puts it in the request header, and the backend validates the field.

So we need to find how the JS encrypts safe:
```javascript
var a = '9622';
var timestamp = String(Date.parse(new Date()) / 1000);
var tokens = hex_md5(window.btoa(a + timestamp));
request.setRequestHeader("safe", tokens);
request.setRequestHeader("timestamp", timestamp);
```
So the logic behind safe is: time divided by 1000, concatenated with 9622, then window.btoa, then MD5.

Note 1: the timestamp can be found in the request headers; don't naively compute new Date() yourself.
Note 2: window.btoa corresponds to Base64 encoding in Python.
Note 3: the MD5 step corresponds to Python's MD5.
Note 4: the site may have modified its MD5 library. To tell whether it uses stock MD5, grab the pre-hash input in the debugger, MD5 it yourself, and compare with the site's value.
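The safe logic above can be sketched in Python with the standard library only; the timestamp argument stands in for the value pulled from the cityjson response:

```python
import base64
import hashlib

def make_safe(timestamp: str) -> str:
    # "9622" + timestamp  ->  btoa (Base64)  ->  MD5 hex digest
    raw = ("9622" + timestamp).encode()
    b64 = base64.b64encode(raw)
    return hashlib.md5(b64).hexdigest()

print(make_safe("1629274784"))  # 32-char hex string
```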
Python code:
```python
import requests
import urllib3
import base64
import hashlib
import re

urllib3.disable_warnings()

url = "https://www.python-spider.com/cityjson"
cookies = ("vaptchaNetway=cn; Hm_lvt_337e99a01a907a08d00bed4a1a52e35d=1628248083,1629106799;"
           " sessionid=a7ckvdtsz5p6i1udfggnkn5tk6je3dgr; _i=MTYyOTI2NDQ3M35ZV2xrYVc1blgzZHBiakUyTWpreU5qUTBOek16TXpR"
           "PXw1MmRkNzJhMDk4NDNkNGRmNz$wNDM1Zj$xYjhiOTBlYQ; "
           "_v=TVRZeU9USTJORFEzTTM1WlYyeHJZVmMxYmxnelpIQmlha1V5VFdwcmVVNX"
           "FVVEJPZWsxNlRYcFJQWHcxTW1Sa056SmhNRGs0TkROa05HUm1OeiR3TkRNMVpqJHhZamhpT1R$bFlR; "
           "sign=1629264618748~ca1c4ad08c0e246bfc23632a09b1ef64; Hm_lpvt_337e99a01a907a08d00bed4a1a52e35d=1629264744")
cookies_dict = {cookie.split("=")[0].strip(): cookie.split("=")[1].strip()
                for cookie in cookies.split(";")}

count_sum = 0
for i in range(1, 86):
    # The response looks like:
    # var returnCitySN = {"cip": "123.112.20.12", "timestamp": "1629274784"};
    res = requests.get(url, verify=False, cookies=cookies_dict)
    print(res.text)
    timestamp = re.findall(r'"(\d{10})"', res.text)[0]
    safe_s = "9622" + timestamp
    safe_b64 = base64.b64encode(safe_s.encode())
    safe_md5 = hashlib.md5(safe_b64).hexdigest()
    print(safe_md5)
    headers = {
        "Host": "www.python-spider.com",
        "Connection": "keep-alive",
        "Content-Length": "0",
        "timestamp": timestamp,
        "safe": safe_md5,
        "Origin": "https://www.python-spider.com",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36",
        "Accept": "*/*",
        "Referer": "https://www.python-spider.com/challenge/1",
        "Accept-Encoding": "gzip, deflate",
        "Accept-Language": "zh-CN,zh;q=0.9",
    }
    url2 = "https://www.python-spider.com/challenge/api/json?page={}&count=14".format(i)
    print(url2)
    res2 = requests.get(url2, verify=False, headers=headers, cookies=cookies_dict)
    print(res2.json()["infos"])
    for item in res2.json()["infos"]:
        if "招" in item["message"]:
            print(item["message"])
            count_sum += 1

print("count_sum =", count_sum)
```
Challenge 2:

We need the encrypted sign string inside the cookie.

Step 1: hook document.cookie to find where the cookie gets written.

This breakpoint-debugging trick is powerful.

How to write the cookie hook:
```javascript
document.cookie_bak = document.cookie;
Object.defineProperty(document, "cookie", {
    set: function (value) {
        debugger;
        return value;
    }
});
```
If the console returns document, the hook succeeded.

Then debug: walk up the call stack to see where the setter is invoked.

One problem with this method: the hook can only run once, otherwise it throws an error. Refresh the page and execute it again.

Step 2: extract the code.

Note 1: add whatever is missing, depth-first.
Note 2: debugging in PyCharm requires the Node.js plugin.
Note 3: window does not exist in Node.js, so write window = this;.
Note 4: window.btoa = require("btoa");

Step 3: wrap it in a function and call it from Python.

First install the package: pip install PyExecjs -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
Python code:
```python
import requests
import urllib3
import execjs

urllib3.disable_warnings()

url = "http://www.python-spider.com/challenge/2"
with open("./2.js", "r") as f:
    js_text = f.read()

js = execjs.compile(js_text)
cookie = js.call("SDK_2").split(";")[0].replace("sign=", "")
print(cookie)

cookies = {
    "sessionid": "xm64ecbvpwv036ycfnw07vg6oyqpluxi",
    "sign": cookie,
}
session = requests.session()
res = session.get(url, verify=False, cookies=cookies)
print(res.content.decode())
```
###
Practice Challenge 14.

The request adds a uc parameter:
```javascript
var list = {
    "page": String(num),
    "uc": window.a,
};
```
This window.a is buried inside JSFuck-obfuscated code. Decoding it gives:
```javascript
(function anonymous() {
    window.s = window.a(window.t + '|' + window.num);
    window.a = window.s;
})
```
Straight to the Python code:
```python
import requests
import urllib3
import execjs

urllib3.disable_warnings()

js_text = """
function SDK_14(n){
    window = this;
    // import CryptoJS from "crypto-js";
    var CryptoJS = require("crypto-js");
    window.num = n;
    window.k = 'wdf2ff*TG@*(F4)*YH)g430HWR(*)' + 'wse';
    window.t = Date.parse(new Date())/1000;
    window.m = CryptoJS.enc.Utf8.parse(window.k);
    window.a = function(word){
        var srcs = CryptoJS.enc.Utf8.parse(word);
        var encrypted = CryptoJS.AES.encrypt(srcs, window.m, {
            mode: CryptoJS.mode.ECB,
            padding: CryptoJS.pad.Pkcs7
        });
        return encrypted.toString();
    };
    // Decoded from the JSFuck blob:
    // (function anonymous() {
    //     window.s = window.a(window.t + '|' + window.num); window.a = window.s;
    // })
    window.s = window.a(window.t + '|' + window.num);
    window.a = window.s;
    return window.a;
}
"""

url = "https://www.python-spider.com/api/challenge14"
cookies = ("vaptchaNetway=cn; Hm_lvt_337e99a01a907a08d00bed4a1a52e35d=1628248083,1629106799;"
           " sessionid=a7ckvdtsz5p6i1udfggnkn5tk6je3dgr; _i=MTYyOTI2NDQ3M35ZV2xrYVc1blgzZHBiakUyTWpreU5qUTBOek16TXpR"
           "PXw1MmRkNzJhMDk4NDNkNGRmNz$wNDM1Zj$xYjhiOTBlYQ; "
           "_v=TVRZeU9USTJORFEzTTM1WlYyeHJZVmMxYmxnelpIQmlha1V5VFdwcmVVNX"
           "FVVEJPZWsxNlRYcFJQWHcxTW1Sa056SmhNRGs0TkROa05HUm1OeiR3TkRNMVpqJHhZamhpT1R$bFlR; "
           "sign=1629264618748~ca1c4ad08c0e246bfc23632a09b1ef64; Hm_lpvt_337e99a01a907a08d00bed4a1a52e35d=1629264744")
cookies_dict = {cookie.split("=")[0].strip(): cookie.split("=")[1].strip()
                for cookie in cookies.split(";")}

all_sum = []
for i in range(1, 101):
    print("page = ", i)
    js = execjs.compile(js_text)
    uc = js.call("SDK_14", i)
    data = {"page": i, "uc": uc}
    res = requests.post(url, verify=False, data=data, cookies=cookies_dict)
    print(res.text)
    page_sum = sum([int(item_dict["value"]) for item_dict in res.json()["data"]])
    all_sum.append(page_sum)

print("all_sum =", sum(all_sum))
```
###
Challenge 16 uses emoji-obfuscated JS.

First decode the emoji obfuscation:
```javascript
window.localStorage.setItem('a', String(Date.parse(new Date()) / 1000));
a = window.localStorage.getItem('a');
window.localStorage.setItem('token', window.btoa(a) + ('|') + binb2b64(hex_sha1(window.btoa(core_sha1(a)))) + b64_sha1(a));
token = window.localStorage.getItem('token');
```
Then extract the code.

Watch out for the formatting-check issue (the code may detect that it has been re-formatted).

Python code:
```python
import requests
import urllib3
import execjs

urllib3.disable_warnings()

url = "https://www.python-spider.com/api/challenge16"
cookies = ("vaptchaNetway=cn; Hm_lvt_337e99a01a907a08d00bed4a1a52e35d=1628248083,1629106799;"
           " sessionid=a7ckvdtsz5p6i1udfggnkn5tk6je3dgr; _i=MTYyOTI2NDQ3M35ZV2xrYVc1blgzZHBiakUyTWpreU5qUTBOek16TXpR"
           "PXw1MmRkNzJhMDk4NDNkNGRmNz$wNDM1Zj$xYjhiOTBlYQ; "
           "_v=TVRZeU9USTJORFEzTTM1WlYyeHJZVmMxYmxnelpIQmlha1V5VFdwcmVVNX"
           "FVVEJPZWsxNlRYcFJQWHcxTW1Sa056SmhNRGs0TkROa05HUm1OeiR3TkRNMVpqJHhZamhpT1R$bFlR; "
           "sign=1629264618748~ca1c4ad08c0e246bfc23632a09b1ef64; Hm_lpvt_337e99a01a907a08d00bed4a1a52e35d=1629264744")
cookies = {cookie.split("=")[0].strip(): cookie.split("=")[1].strip()
           for cookie in cookies.split(";")}

all_sum = []
for i in range(1, 101):
    print("page = ", i)
    with open("./16.js", "r") as f:
        js_text = f.read()
    js = execjs.compile(js_text)
    js_safe = js.call("SDK_16")
    headers = {
        "Host": "www.python-spider.com",
        "Connection": "keep-alive",
        "Content-Length": "6",
        "safe": js_safe,
        "Origin": "https://www.python-spider.com",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36",
        "Accept": "*/*",
        "Referer": "https://www.python-spider.com/challenge/16",
        "Accept-Encoding": "gzip, deflate",
        "Accept-Language": "zh-CN,zh;q=0.9",
        "x-requested-with": "XMLHttpRequest",
    }
    data = {"page": i}
    res = requests.post(url, verify=False, headers=headers, data=data, cookies=cookies)
    print(res.text)
    page_sum = sum([int(item_dict["value"]) for item_dict in res.json()["data"]])
    all_sum.append(page_sum)

print("all_sum =", sum(all_sum))
```