Python Asynchronous Programming in Practice: 5 Techniques for Handling IO-Bound Tasks Efficiently
1. Basic Async Functions and the Event Loop
Understanding the basics of async/await:
import asyncio

async def fetch_data():
    print("Fetching data...")
    await asyncio.sleep(2)  # simulate an IO operation
    print("Data fetched")
    return {"data": 123}

async def main():
    task = asyncio.create_task(fetch_data())
    print("Doing other work")
    result = await task
    print(f"Result: {result}")

asyncio.run(main())
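To make the benefit of create_task concrete, here is a minimal timing sketch (an illustrative addition, not from the original example; wait_one and the 1-second delays are stand-ins): awaiting coroutines one by one takes the sum of their delays, while scheduling them as tasks lets the waits overlap.

import asyncio
import time

async def wait_one():
    await asyncio.sleep(1)  # stands in for an IO wait

async def sequential():
    # awaiting each coroutine directly runs them one after another (~2s total)
    await wait_one()
    await wait_one()

async def concurrent():
    # create_task schedules both immediately, so the waits overlap (~1s total)
    t1 = asyncio.create_task(wait_one())
    t2 = asyncio.create_task(wait_one())
    await t1
    await t2

start = time.perf_counter()
asyncio.run(sequential())
print(f"sequential: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
asyncio.run(concurrent())
print(f"concurrent: {time.perf_counter() - start:.2f}s")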
2. Running Multiple Tasks Concurrently
Use gather to run several coroutines at the same time:
async def download_page(url):
    print(f"Start downloading: {url}")
    await asyncio.sleep(1)  # simulate a network request
    print(f"Finished downloading: {url}")
    return f"content of {url}"

async def main():
    urls = ["url1", "url2", "url3"]
    tasks = [download_page(url) for url in urls]
    results = await asyncio.gather(*tasks)
    print(f"All pages downloaded: {results}")

asyncio.run(main())
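One practical detail: by default, gather propagates the first exception it encounters, so one failed task can mask the others' results. A small sketch (the flaky_download coroutine and the failing "url2" are illustrative, not part of the article's example) shows how return_exceptions=True returns errors in place alongside the successful results:

import asyncio

async def flaky_download(url):
    await asyncio.sleep(0.5)  # simulate a network wait
    if url == "url2":
        raise RuntimeError(f"failed to fetch {url}")
    return f"content of {url}"

async def main():
    urls = ("url1", "url2", "url3")
    tasks = [flaky_download(u) for u in urls]
    # return_exceptions=True puts exceptions into the result list instead of raising
    results = await asyncio.gather(*tasks, return_exceptions=True)
    for url, result in zip(urls, results):
        if isinstance(result, Exception):
            print(f"{url}: error -> {result}")
        else:
            print(f"{url}: ok -> {result}")

asyncio.run(main())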
3. Asynchronous File Operations
Use aiofiles for file IO:
import asyncio
import aiofiles

async def async_write_file():
    async with aiofiles.open('data.txt', mode='w') as f:
        await f.write("Content written asynchronously")
    print("File written")

async def async_read_file():
    async with aiofiles.open('data.txt', mode='r') as f:
        content = await f.read()
    print(f"File content: {content}")

async def main():
    await async_write_file()
    await async_read_file()

asyncio.run(main())
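For larger files it is usually better to stream than to read everything at once. A brief sketch, assuming the data.txt written above exists (aiofiles file handles support async iteration):

import asyncio
import aiofiles

async def read_lines(path):
    # iterate the file asynchronously instead of loading it all into memory
    async with aiofiles.open(path, mode='r') as f:
        async for line in f:
            print(line.rstrip())

asyncio.run(read_lines('data.txt'))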
4. Asynchronous HTTP Client
Use aiohttp to send concurrent requests:
import asyncio
import aiohttp

async def fetch_url(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    urls = [
        "https://example.com",
        "https://example.org",
        "https://example.net",
    ]
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        results = await asyncio.gather(*tasks)
        print(f"Fetched the content of {len(results)} pages")

asyncio.run(main())
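In real use, requests should not be allowed to hang indefinitely. A hedged variant of the same fetch (the 10-second budget is an arbitrary illustration) adds a session-wide timeout and raises on HTTP error statuses:

import asyncio
import aiohttp

async def fetch_url(session, url):
    async with session.get(url) as response:
        response.raise_for_status()  # turn 4xx/5xx responses into exceptions
        return await response.text()

async def main():
    timeout = aiohttp.ClientTimeout(total=10)  # whole request must finish within 10s
    async with aiohttp.ClientSession(timeout=timeout) as session:
        html = await fetch_url(session, "https://example.com")
        print(f"Got {len(html)} characters")

asyncio.run(main())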
A rough comparison of synchronous vs asynchronous run times for the kinds of workloads above:

| Workload | Synchronous time | Asynchronous time |
|---|---|---|
| 3 network requests | 3 s | 1 s |
| 10 file operations | 10 s | 1.5 s |
| Database queries | 5 s | 1.2 s |
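These figures are illustrative rather than benchmarks. To see where the first row's difference comes from, a minimal sketch using time.sleep / asyncio.sleep as stand-ins for real 1-second requests:

import asyncio
import time

def sync_request():
    time.sleep(1)  # stand-in for a blocking network call

async def async_request():
    await asyncio.sleep(1)  # stand-in for a non-blocking network call

async def run_async_batch():
    await asyncio.gather(*(async_request() for _ in range(3)))

# synchronous: 3 requests back to back -> roughly 3 seconds
start = time.perf_counter()
for _ in range(3):
    sync_request()
print(f"sync:  {time.perf_counter() - start:.1f}s")

# asynchronous: 3 requests overlapped -> roughly 1 second
start = time.perf_counter()
asyncio.run(run_async_batch())
print(f"async: {time.perf_counter() - start:.1f}s")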
5. Asynchronous Database Access
Use asyncpg to work with PostgreSQL:
import asyncio
import asyncpg

async def query_database():
    conn = await asyncpg.connect(
        user='user', password='pass',
        database='db', host='localhost')
    # run the query
    result = await conn.fetch('SELECT * FROM users WHERE id = $1', 1)
    print(f"Query result: {result}")
    await conn.close()

asyncio.run(query_database())
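For anything beyond a one-off script, opening a new connection per query is wasteful. A sketch of the same query through a connection pool (credentials, pool sizes, and the users table are the same placeholders as above):

import asyncio
import asyncpg

async def main():
    # a pool reuses connections across queries instead of reconnecting each time
    pool = await asyncpg.create_pool(
        user='user', password='pass',
        database='db', host='localhost',
        min_size=1, max_size=10)
    async with pool.acquire() as conn:
        rows = await conn.fetch('SELECT * FROM users WHERE id = $1', 1)
        print(f"Query result: {rows}")
    await pool.close()

asyncio.run(main())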
Used well, asynchronous programming can improve a Python application's performance on IO-bound tasks by roughly 5-10x, which makes it a core skill for modern Python developers.