It seems like only yesterday that I connected The Smith Agent to my Twitter account, more out of curiosity than out of any expectation of doing something useful, and today the account has exceeded 200,000 tweets 🙂
How fast these kids grow up!
To celebrate the milestone, I decided to write an update to my previous post describing what happens under the hood of the project.
Let’s talk a little about the various components that make up the project.
Zefiro collects information about internet domains; this process produces lists of recently registered domains.
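The simplest way to picture this step is as a diff between successive snapshots of known domains. Here is a minimal sketch in Python (not Zefiro's actual .NET code; the data and function name are invented for illustration):

```python
# Illustrative sketch, NOT Zefiro's real implementation:
# derive "recently registered" domains by diffing two daily snapshots.
def newly_registered(today: set[str], yesterday: set[str]) -> list[str]:
    """Domains present in today's snapshot but absent from yesterday's."""
    return sorted(today - yesterday)

yesterday = {"example.com", "oldsite.com"}
today = {"example.com", "oldsite.com", "fresh-login-portal.com"}
print(newly_registered(today, yesterday))  # ['fresh-login-portal.com']
```

In practice the snapshots would come from zone files or registration feeds rather than in-memory sets, but the set-difference idea is the same.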
Scirocco collects information from Certificate Transparency Logs. This information is useful for identifying new domains and subdomains.
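Each certificate logged in Certificate Transparency carries Subject Alternative Names, and those names are where new domains and subdomains show up. A hedged sketch of that extraction step (the data shape and the naive last-two-labels heuristic are my own simplification, not Scirocco's logic, which would need a real public-suffix list):

```python
# Illustrative sketch: extract registrable domains and host names from
# the SAN entries of a certificate seen in a CT log. The two-label
# heuristic is a simplification; real code would use the Public Suffix List.
def split_sans(san_names: list[str]) -> tuple[set[str], set[str]]:
    domains: set[str] = set()
    hosts: set[str] = set()
    for name in san_names:
        name = name.lstrip("*.")  # normalize wildcards like *.example.com
        hosts.add(name)
        labels = name.split(".")
        if len(labels) >= 2:
            domains.add(".".join(labels[-2:]))
    return domains, hosts

sans = ["login.example.com", "*.example.com", "example.com"]
domains, hosts = split_sans(sans)
print(sorted(domains))  # ['example.com']
```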
Watson uses Certificate Transparency log data to identify domains registered in the last few hours. For this it relies on agents distributed across datacenters around the world.
Miniluv uses the data from Zefiro, Scirocco and Watson to select new domains and distributes this information to subscribers, both internal and external to the platform.
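Conceptually this is a fan-out: one stream of new domains, many consumers. A minimal publish/subscribe sketch (the class and method names are hypothetical, not Miniluv's API):

```python
# Illustrative pub/sub sketch, not Miniluv's actual interface:
# fan new-domain events out to internal and external subscribers.
from typing import Callable

class DomainFeed:
    def __init__(self) -> None:
        self._subscribers: list[Callable[[str], None]] = []

    def subscribe(self, callback: Callable[[str], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, domain: str) -> None:
        for callback in self._subscribers:
            callback(domain)

feed = DomainFeed()
received: list[str] = []
feed.subscribe(received.append)   # internal consumer
feed.subscribe(lambda d: None)    # stand-in for an external webhook
feed.publish("fresh-login-portal.com")
print(received)  # ['fresh-login-portal.com']
```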
Smith Core orchestrates the Smith agents, dividing the work among the various distributed components.
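One simple way to divide such work is round-robin assignment across the available agents. A sketch under that assumption (the agent names and function are invented; Smith Core's real scheduling may well be smarter):

```python
# Illustrative sketch of round-robin work distribution, not Smith Core's
# actual scheduler: spread domains to check across the available agents.
from itertools import cycle

def assign_work(domains: list[str], agents: list[str]) -> dict[str, list[str]]:
    plan: dict[str, list[str]] = {agent: [] for agent in agents}
    for domain, agent in zip(domains, cycle(agents)):
        plan[agent].append(domain)
    return plan

plan = assign_work(["a.com", "b.com", "c.com"], ["agent-eu", "agent-us"])
print(plan)  # {'agent-eu': ['a.com', 'c.com'], 'agent-us': ['b.com']}
```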
Hammer keeps monitoring active on sites that exhibit certain characteristics and are therefore entrusted to its care.
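At its core, this kind of recurring monitoring is a schedule: each watched site has a next-check time, and the monitor repeatedly picks whatever is due. A hedged sketch (field names and intervals are my own, not Hammer's):

```python
# Illustrative scheduling sketch, not Hammer's real code:
# select the watched sites whose next check is due.
from datetime import datetime, timedelta

def due_sites(watchlist: dict[str, datetime], now: datetime) -> list[str]:
    return sorted(url for url, next_check in watchlist.items() if next_check <= now)

now = datetime(2021, 1, 1, 12, 0)
watchlist = {
    "http://suspicious.example": now - timedelta(minutes=5),  # overdue
    "http://other.example": now + timedelta(hours=1),         # not yet due
}
print(due_sites(watchlist, now))  # ['http://suspicious.example']
```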
The Smith agents are in charge of checking the domain's context, its hosting, and the content the site displays. This information feeds a score that estimates how dangerous the site might be. If the threat is identified with certainty, the Twitter report begins with "Threat …"; if the threat is in doubt, it begins with "Possible threat …". Like Watson, this component uses agents distributed in various datacenters around the world.
All these components are based on .NET (Framework and Core), and the databases are managed by SQL Server. The operating systems used are Windows Server 2019 and Ubuntu Linux.
One of the main objectives of the platform is the collection of phishing kits and malware.
Currently these files are only saved, but in the (hopefully near) future they will be shared to create IoCs and datasets for training artificial-intelligence models that improve threat-discovery techniques. The idea is to get better at discovering new threats by using the information contained in threats already discovered.
Another future evolution of the platform will be integration with email services to report malicious and compromised accounts, in order to reduce damage and speed up investigations. This already happens with some service providers and partners who handle these reports when they are relevant to them.