DOI: 10.1145/2068816.2068840

Suspended accounts in retrospect: an analysis of Twitter spam

Published: 02 November 2011
  • Abstract

    In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the widespread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification.
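    The retrospective detection step lends itself to a short illustration. A suspended account was visible through the Twitter REST API of that era because profile lookups for it failed with a distinctive error code. The sketch below is a minimal approximation of such a checker, assuming the historical v1.1 users/show endpoint, its documented error codes (63 for a suspended user, 50 for a user not found), and a placeholder bearer token; it is not the authors' actual collection pipeline.

        import requests

        API_URL = "https://api.twitter.com/1.1/users/show.json"
        BEARER_TOKEN = "..."  # placeholder credential, assumed for illustration

        def account_status(screen_name):
            """Classify an account as active, suspended, or deleted using the
            historical v1.1 error codes (63 = suspended, 50 = not found)."""
            resp = requests.get(
                API_URL,
                params={"screen_name": screen_name},
                headers={"Authorization": "Bearer " + BEARER_TOKEN},
                timeout=10,
            )
            if resp.status_code == 200:
                return "active"
            codes = {e.get("code") for e in resp.json().get("errors", [])}
            if 63 in codes:
                return "suspended"  # "User has been suspended."
            if 50 in codes:
                return "deleted"    # "User not found."
            return "unknown (HTTP {})".format(resp.status_code)

    Polling previously observed accounts with a checker like this, before and after Twitter's enforcement actions, is one way such a suspended-account corpus could be assembled.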
    Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable storefronts, blurring the line regarding what constitutes spam on social networks.
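    As a rough illustration of the three outreach strategies quantified above (social relationships, trend hijacking, and unsolicited mentions), the sketch below buckets an account's tweets by strategy. The tweet dictionaries mirror the Twitter API payload shape; the trending-topic set, the follower fallback, and the majority-vote rule are assumptions made for illustration, not the paper's actual classifier.

        from collections import Counter

        def outreach_strategy(tweets, trending_topics, follower_count):
            """Label a spam account by its dominant audience-reaching strategy.

            tweets: list of dicts shaped like Twitter API tweet objects
            trending_topics: set of lowercased trending hashtag strings
            follower_count: the account's follower total
            """
            counts = Counter()
            for tweet in tweets:
                entities = tweet.get("entities", {})
                # unsolicited @-mentions push spam directly at other users
                if entities.get("user_mentions"):
                    counts["unsolicited_mentions"] += 1
                # tweeting a currently trending hashtag hijacks that trend's audience
                hashtags = {h["text"].lower() for h in entities.get("hashtags", [])}
                if hashtags & trending_topics:
                    counts["trend_hijacking"] += 1
            if not counts:
                # neither mentions nor trends: any audience must come from followers
                return "social_relationships" if follower_count > 0 else "other"
            return counts.most_common(1)[0][0]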




      Published In

      IMC '11: Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference
      November 2011
      612 pages
      ISBN:9781450310130
      DOI:10.1145/2068816

      In-Cooperation

      • USENIX Association

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. account abuse
      2. social networks
      3. spam

      Qualifiers

      • Research-article

      Conference

      IMC '11: Internet Measurement Conference
      November 2 - 4, 2011
      Berlin, Germany

      Acceptance Rates

      Overall Acceptance Rate 277 of 1,083 submissions, 26%



      Cited By

      • (2024) Fake Social Media Profile Detection and Reporting Using Machine Learning. International Journal of Advanced Research in Science, Communication and Technology, 465-470. DOI: 10.48175/IJARSCT-16695
      • (2024) Understanding Characteristics of Phishing Reports from Experts and Non-Experts on Twitter. IEICE Transactions on Information and Systems, E107.D(7), 807-824. DOI: 10.1587/transinf.2023EDP7221
      • (2024) Investigating Influential Users' Responses to Permanent Suspension on Social Media. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1-41. DOI: 10.1145/3637356
      • (2024) Identifying Risky Vendors in Cryptocurrency P2P Marketplaces. Proceedings of the ACM Web Conference 2024, 99-110. DOI: 10.1145/3589334.3645475
      • (2023) Canary in Twitter Mine: Collecting Phishing Reports from Experts and Non-experts. Proceedings of the 18th International Conference on Availability, Reliability and Security, 1-12. DOI: 10.1145/3600160.3600163
      • (2023) Preemptive Detection of Fake Accounts on Social Networks via Multi-Class Preferential Attachment Classifiers. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 105-116. DOI: 10.1145/3580305.3599471
      • (2023) Hate Raids on Twitch: Echoes of the Past, New Modalities, and Implications for Platform Governance. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-28. DOI: 10.1145/3579609
      • (2023) Understanding and Detecting Abused Image Hosting Modules as Malicious Services. Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 3213-3227. DOI: 10.1145/3576915.3623143
      • (2023) Misbehavior and Account Suspension in an Online Financial Communication Platform. Proceedings of the ACM Web Conference 2023, 2686-2697. DOI: 10.1145/3543507.3583385
      • (2023) Markov-Driven Graph Convolutional Networks for Social Spammer Detection. IEEE Transactions on Knowledge and Data Engineering, 35(12), 12310-12322. DOI: 10.1109/TKDE.2022.3150669
