twscrape is a Python library for scraping Twitter/X data through the GraphQL API, with account rotation and session management.
Use this skill when you need to programmatically collect tweets, user profiles, follower/following lists, or trends from Twitter/X.
## Installation

```bash
pip install twscrape
```
## Quick start

```python
import asyncio
from twscrape import API, gather

async def main():
    api = API()  # uses accounts.db by default

    # Add an account with cookies (more stable)
    cookies = "abc=12; ct0=xyz"
    await api.pool.add_account("user1", "pass1", "email@example.com", "mail_pass", cookies=cookies)

    # Or add an account with login/password (less stable)
    await api.pool.add_account("user2", "pass2", "email2@example.com", "mail_pass2")
    await api.pool.login_all()

asyncio.run(main())
```
All of the following calls are made from within an async function:

```python
# Search tweets
tweets = await gather(api.search("elon musk", limit=20))

# Get user info
user = await api.user_by_login("xdevelopers")
user = await api.user_by_id(2244994945)

# Get user tweets
await gather(api.user_tweets(user.id, limit=20))
await gather(api.user_tweets_and_replies(user.id, limit=20))
await gather(api.user_media(user.id, limit=20))

# Get followers/following
await gather(api.followers(user.id, limit=20))
await gather(api.following(user.id, limit=20))

# Tweet operations
tweet_id = 1234567890  # any tweet ID
await api.tweet_details(tweet_id)
await gather(api.retweeters(tweet_id, limit=20))
await gather(api.tweet_replies(tweet_id, limit=20))

# Trends
await gather(api.trends("news"))

# Stream results as they arrive instead of gathering them all
async for tweet in api.search("elon musk"):
    print(tweet.id, tweet.user.username, tweet.rawContent)
```
## API methods

### `search(query, limit, kv={})`

Search tweets by query string.

Parameters:
- `query` (str): search query (supports Twitter search syntax)
- `limit` (int): maximum number of tweets to return
- `kv` (dict): additional parameters (e.g. `{"product": "Top"}` for Top tweets)

Returns: `AsyncIterator` of `Tweet` objects.

Example:

```python
# Latest tweets
async for tweet in api.search("elon musk", limit=20):
    print(tweet.rawContent)

# Top tweets
await gather(api.search("python", limit=20, kv={"product": "Top"}))
```
### `user_by_login(username)`

Get user information by username.

```python
user = await api.user_by_login("xdevelopers")
print(user.id, user.displayname, user.followersCount)
```
### Other methods

- `user_by_id(user_id)`: get user information by user ID
- `followers(user_id, limit)`: get a user's followers
- `following(user_id, limit)`: get the users that a user follows
- `verified_followers(user_id, limit)`: get only verified followers
- `subscriptions(user_id, limit)`: get a user's Twitter Blue subscriptions
- `tweet_details(tweet_id)`: get detailed information about a specific tweet
- `tweet_replies(tweet_id, limit)`: get replies to a tweet
- `retweeters(tweet_id, limit)`: get users who retweeted a specific tweet
- `user_tweets(user_id, limit)`: get tweets from a user (excludes replies)
- `user_tweets_and_replies(user_id, limit)`: get tweets and replies from a user
- `user_media(user_id, limit)`: get tweets with media from a user
- `list_timeline(list_id)`: get tweets from a Twitter list
- `trends(category)`: get trending topics by category ("news", "sport", "entertainment", etc.)
## Account management

With cookies (recommended):

```python
cookies = "abc=12; ct0=xyz"  # string or JSON format
await api.pool.add_account("user", "pass", "email@example.com", "mail_pass", cookies=cookies)
```

With credentials:

```python
await api.pool.add_account("user", "pass", "email@example.com", "mail_pass")
await api.pool.login_all()
```
Account management from the CLI:

```bash
# Add accounts from file
twscrape add_accounts accounts.txt username:password:email:email_password

# Login all accounts
twscrape login_accounts

# Manual email verification
twscrape login_accounts --manual

# List accounts and their status
twscrape accounts

# Re-login specific accounts
twscrape relogin user1 user2

# Retry failed logins
twscrape relogin_failed
```
## Proxy configuration

Per-account proxy:

```python
proxy = "http://login:pass@example.com:8080"
await api.pool.add_account("user", "pass", "email@example.com", "mail_pass", proxy=proxy)
```

API-level proxy:

```python
api = API(proxy="http://login:pass@example.com:8080")
```

Environment variable:

```bash
export TWS_PROXY=socks5://user:pass@127.0.0.1:1080
twscrape search "elon musk"
```

Changing the proxy at runtime:

```python
api.proxy = "socks5://user:pass@127.0.0.1:1080"
doc = await api.user_by_login("elonmusk")
api.proxy = None  # disable proxy
```

Priority: `api.proxy` > `TWS_PROXY` env var > account-specific proxy.
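The priority order amounts to a first-non-empty choice. A minimal sketch (this `resolve_proxy` helper is illustrative only, not part of twscrape):

```python
def resolve_proxy(api_proxy, env_proxy, account_proxy):
    """Return the proxy that takes effect: an explicitly set api.proxy wins,
    then the TWS_PROXY environment variable, then the account's own proxy."""
    return api_proxy or env_proxy or account_proxy
```

For example, with `api.proxy` unset and `TWS_PROXY` defined, the environment value wins over any per-account proxy.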
## CLI usage

```bash
twscrape search "QUERY" --limit=20
twscrape search "elon musk lang:es" --limit=20 > data.txt
twscrape search "python" --limit=20 --raw  # raw API responses

twscrape user_by_login USERNAME
twscrape user_by_id USER_ID
twscrape followers USER_ID --limit=20
twscrape following USER_ID --limit=20
twscrape verified_followers USER_ID --limit=20
twscrape user_tweets USER_ID --limit=20
twscrape tweet_details TWEET_ID
twscrape tweet_replies TWEET_ID --limit=20
twscrape retweeters TWEET_ID --limit=20
twscrape trends sport
twscrape trends news

# Use a custom accounts database
twscrape --db custom-accounts.db <command>
```
## Advanced usage

Raw API responses:

```python
async for response in api.search_raw("elon musk"):
    print(response.status_code, response.json())
```

Stopping iteration early (close the generator cleanly):

```python
from contextlib import aclosing

async with aclosing(api.search("elon musk")) as gen:
    async for tweet in gen:
        if tweet.id < 200:
            break
```
Converting results to dict/JSON:

```python
user = await api.user_by_id(user_id)
user_dict = user.dict()
user_json = user.json()
```
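Since each model exposes `.dict()`, results can be persisted as JSON Lines. A small sketch (the `save_jsonl` helper and the `default=str` fallback for datetime fields are my additions, not twscrape APIs):

```python
import json

def save_jsonl(records, path):
    """Write an iterable of dicts (e.g. tweet.dict() results) as JSON Lines,
    one record per line; non-JSON types such as datetime fall back to str()."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False, default=str) + "\n")
```

Usage: `save_jsonl([t.dict() for t in tweets], "tweets.jsonl")`.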
Logging:

```python
from twscrape.logger import set_log_level

set_log_level("DEBUG")
```
## Environment variables

- `TWS_PROXY`: global proxy for all accounts (e.g. `socks5://user:pass@127.0.0.1:1080`)
- `TWS_WAIT_EMAIL_CODE`: timeout for email verification (default: 30 seconds)
- `TWS_RAISE_WHEN_NO_ACCOUNT`: raise an exception when no accounts are available instead of waiting; values: `false`, `0`, `true`, `1` (default: `false`)
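A sample environment for an unattended run (all values are placeholders):

```bash
export TWS_PROXY=socks5://user:pass@127.0.0.1:1080
export TWS_WAIT_EMAIL_CODE=60
export TWS_RAISE_WHEN_NO_ACCOUNT=true
```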
Note: `user_tweets` and `user_tweets_and_replies` return at most about 3,200 tweets per user (a platform-side timeline cap). The library automatically distributes requests across the account pool and waits out rate limits.
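To reach further back than the per-user timeline cap, a common workaround is to split a `from:user` search into date windows using the `since:`/`until:` search operators. A sketch (the `date_windows` helper is mine, not a twscrape API):

```python
from datetime import date, timedelta

def date_windows(query, start, end, days=30):
    """Yield sub-queries covering [start, end) in fixed-size windows,
    e.g. 'from:elonmusk since:2024-01-01 until:2024-01-31'."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        yield f"{query} since:{cur.isoformat()} until:{nxt.isoformat()}"
        cur = nxt
```

Each window is then passed to `api.search(...)` and the results concatenated.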
## Common patterns

Collect a user's profile, tweets, and graph in one call:

```python
async def collect_user_data(username):
    user = await api.user_by_login(username)

    # Collect tweets
    tweets = await gather(api.user_tweets(user.id, limit=100))

    # Collect followers
    followers = await gather(api.followers(user.id, limit=100))

    # Collect following
    following = await gather(api.following(user.id, limit=100))

    return {
        'user': user,
        'tweets': tweets,
        'followers': followers,
        'following': following,
    }
```
Search with Twitter query operators:

```python
# Language filter
await gather(api.search("python lang:en", limit=20))

# Date filter
await gather(api.search("AI since:2024-01-01", limit=20))

# From a specific user
await gather(api.search("from:elonmusk", limit=20))

# With media
await gather(api.search("cats filter:media", limit=20))
```
Fetch multiple users concurrently:

```python
async def process_users(usernames):
    tasks = [api.user_by_login(username) for username in usernames]
    users = await asyncio.gather(*tasks)
    return users
```
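Unbounded `asyncio.gather` fires every request at once, which can be hard on the account pool; capping concurrency with a semaphore is gentler. A sketch (the `fetch_limited` helper is illustrative; pass any coroutine function such as `api.user_by_login` as `fetch`):

```python
import asyncio

async def fetch_limited(items, fetch, max_concurrency=5):
    """Run fetch(item) for every item, keeping at most
    max_concurrency coroutines in flight at a time."""
    sem = asyncio.Semaphore(max_concurrency)

    async def one(item):
        async with sem:
            return await fetch(item)

    return await asyncio.gather(*(one(i) for i in items))
```

Usage: `users = await fetch_limited(usernames, api.user_by_login)`.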
To install the latest development version:

```bash
pip install git+https://github.com/vladkens/twscrape.git
```

For detailed API documentation and examples, see the reference files in the `references/` directory:

- `references/installation.md`: installation and setup
- `references/api_methods.md`: complete API method reference
- `references/account_management.md`: account configuration and management
- `references/cli_usage.md`: command-line interface guide
- `references/proxy_config.md`: proxy configuration options
- `references/examples.md`: code examples and patterns

Repository: https://github.com/vladkens/twscrape (Python, MIT license, 1998+ stars)