Social networks may be employed to commit cybercrimes such as promoting hate or instigating violence. These platforms have become a setting that Law Enforcement Agencies (LEAs) must monitor closely to detect cybercrimes and safeguard public safety. This paper proposes the adoption of Natural Language Processing (NLP) models, in particular a similarity model, to identify suspicious activities in social networks, making it possible to detect and prevent the related cybercrimes. An LEA can use NLP models to group similar posts into clusters, determine their polarity, uncover connections between users, and identify a subset of user accounts that promote Hostile Social Manipulation (HSM), so that those accounts can be reviewed in depth as part of crime-prevention efforts.
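
To make the pipeline described above more concrete, the following is a minimal sketch of how posts could be embedded, clustered by semantic similarity, and scored for polarity. It assumes the sentence-transformers, scikit-learn, and vaderSentiment libraries; the model name "all-MiniLM-L6-v2", the distance threshold, and the toy posts are illustrative choices, not the configuration used in the paper.

    # Minimal sketch (assumed, not the paper's implementation): embed posts,
    # cluster them by cosine similarity, and score each post's polarity.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import AgglomerativeClustering
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    posts = [
        "example post promoting violence against a group",   # toy data
        "another post with very similar hostile wording",
        "an unrelated post about the weather",
    ]

    # Sentence embeddings capture semantic similarity between posts.
    model = SentenceTransformer("all-MiniLM-L6-v2")          # assumed model
    embeddings = model.encode(posts, normalize_embeddings=True)

    # Group semantically similar posts; the threshold is illustrative.
    clustering = AgglomerativeClustering(
        n_clusters=None, metric="cosine", linkage="average",
        distance_threshold=0.4,
    )
    labels = clustering.fit_predict(embeddings)

    # Polarity per post, which an analyst could aggregate per cluster or user.
    analyzer = SentimentIntensityAnalyzer()
    for post, label in zip(posts, labels):
        polarity = analyzer.polarity_scores(post)["compound"]
        print(label, round(polarity, 2), post)

Clusters of highly similar, strongly negative posts spread across many accounts are the kind of signal an analyst could then inspect for coordinated HSM activity.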