Once you have your CSV and GEXF files, it’s time to open them in Gephi:
Install Gephi: download and install it from gephi.org.
Open Gephi: launch the program.
Import the GEXF file: go to File → Open and select your .gexf file.
This will load the network of accounts and their interactions into Gephi, allowing you to visualize how the conversation is structured.

Once the GEXF file is imported into Gephi, you can compute the clusters right away. The GEXF contains all the interactions between users on your topic (retweets, replies, mentions), so Gephi can use this network to group accounts into communities.
Here’s how to do it: in the Statistics panel, run the Modularity statistic.
Gephi will analyze the network and automatically group accounts into clusters. Each cluster represents a community of accounts that interact frequently, often corresponding to factions or groups involved in the conversation.
Once it’s done, you will see the results in the Appearance panel. Click on the palette, then Partition, then modularity_class, and hit Apply.
In my results, we see that I have 3 main communities: blue, green, and orange, plus some smaller ones.
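If you ever want to script this step instead, here is a minimal sketch of the same clustering in Python, using networkx’s Louvain implementation (the same family of algorithms behind Gephi’s Modularity statistic). The filename is a placeholder; adapt it to your own export.

```python
# A rough equivalent of Gephi's modularity clustering, with networkx.
# Assumes networkx >= 3.0 (pip install networkx) and a GEXF export
# named "topic.gexf" (placeholder filename).
import networkx as nx

G = nx.read_gexf("topic.gexf")

# Louvain community detection, sorted largest community first.
communities = sorted(
    nx.community.louvain_communities(G, seed=42), key=len, reverse=True
)

# Tag each node with its community index, mimicking Gephi's
# modularity_class column, then save the result for the next steps.
for class_id, members in enumerate(communities):
    for node in members:
        G.nodes[node]["modularity_class"] = class_id

nx.write_gexf(G, "topic_with_communities.gexf")
print(f"{len(communities)} communities, largest: {len(communities[0])} accounts")
```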
But now, the important part is to know what each community is saying. For that, I need to change one thing in the CSV containing my tweets: with Notepad (or your favorite text editor), I change the column header "screen name" to "Id".
By changing screen name to Id, I can now import my tweets into Gephi too! This isn’t the usual way Gephi is meant to be used (it’s not really designed to import content), but with this little hack, Gephi will know which account posted which tweet and, more importantly, which tweets belong to which community.
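For those scripting along, the same rename takes a few lines of pandas. The filenames are placeholders, and I’m assuming the export really names the column "screen name".

```python
# Rename the account column so Gephi (or a script) can match tweets
# to nodes by "Id". Filenames and column name are assumptions.
import pandas as pd

df = pd.read_csv("tweets.csv")
df = df.rename(columns={"screen name": "Id"})
df.to_csv("tweets_for_gephi.csv", index=False)
```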
To import the tweets, use the same method as for the GEXF: File → Import Spreadsheet → Select your CSV → OK/Finish
Be very careful to check the option "Append to existing workspace" at the end of the import. This ensures your tweets are added to your existing project, rather than replacing it.
Now you have everything in one workspace: the network of accounts, each tagged with its community, and every tweet attached to the account that posted it.
And… it’s almost over! Just two more things to do. To build good samples of tweets for each community, you’ll want to export the tweets community by community, in separate files. For that, you need to add a filter: a partition filter (like below) lets you choose which community you export.
When the filter is added, I click on the first community (the green one in my example) and then click "Play", so that only the green community is selected. You can see in my Context window, at the top, that only 34% of my nodes (the accounts) are selected, because that’s the share of accounts in my green community.
And the last thing in Gephi: the final export!
To export the tweets made by the green accounts, I go to the Data Laboratory, click Export Table, check "Visible only", then go to Options. There, I select "Nodes" and "Attributes", and unselect all attributes except "Text".
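Here is what that filter-and-export step could look like in code, continuing from the two sketches above; the "text" column name is an assumption based on the attribute shown in Gephi.

```python
# Scripted equivalent of the partition filter + "Export table" steps:
# write one CSV of tweet text per community.
import networkx as nx
import pandas as pd

G = nx.read_gexf("topic_with_communities.gexf")  # from the clustering sketch
df = pd.read_csv("tweets_for_gephi.csv")         # from the rename sketch

# Map each account to its community, then attach it to every tweet.
labels = {node: data["modularity_class"] for node, data in G.nodes(data=True)}
df["modularity_class"] = df["Id"].map(labels)

for class_id, group in df.groupby("modularity_class"):
    group[["text"]].to_csv(f"community_{class_id}_tweets.csv", index=False)
```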
With this, you now have a new CSV containing only the text of the tweets made by the green community! I can then upload this CSV directly to ChatGPT and get a much more precise output, even with a very basic prompt like: "This is a sample of tweets about a French political event. Summarize in 5 points maximum. In English."
Here, my green community, representing exactly 4,427 accounts, is mostly talking about Manon Aubry, comparing the September 10 French demonstration with the one in London.
Each of my other communities is probably talking about something else, still related to September 10 (since that’s the topic of my Visibrain project), but with their own unique perspective.
To see the differences between each community, I just have to change my filter, re-export my data into another CSV, and ask ChatGPT to summarize this new sample.
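This re-filter, re-export, re-ask loop is also where a script pays off. Here is a sketch that summarizes every per-community CSV through the OpenAI API; the model name is just an example, and the sample size is arbitrary.

```python
# Summarize each community's tweets with the OpenAI API (pip install
# openai pandas). Assumes OPENAI_API_KEY is set in the environment and
# the per-community CSVs from the previous sketch exist.
from pathlib import Path

import pandas as pd
from openai import OpenAI

client = OpenAI()
PROMPT = (
    "This is a sample of tweets about a French political event. "
    "Summarize in 5 points maximum. In English.\n\n{tweets}"
)

for path in sorted(Path(".").glob("community_*_tweets.csv")):
    texts = pd.read_csv(path)["text"].astype(str)
    # Large communities won't fit in one request: take a random sample.
    sample = texts.sample(min(len(texts), 500), random_state=0)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, use whatever you have
        messages=[{"role": "user", "content": PROMPT.format(tweets="\n".join(sample))}],
    )
    print(f"--- {path.name} ---")
    print(response.choices[0].message.content)
```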
Advantages
This technique is not limited by the scale of the data. I’ve used a pretty similar process with 2 million tweets for the September 10 analysis.
At Agoratlas, we sometimes do the same thing on other sources: 50 million TikTok comments, thousands of video transcripts, or LinkedIn biographies. The only difference is that we don’t take just one sample, but multiple samples for each community, based on multiple criteria.
We also provide the ID of each post, so the AI can include specific examples in its response, letting us verify its answers as quickly as possible.
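As a sketch of that idea, reusing the per-community export from earlier: keep the post Id next to the text, and ask the model to cite the Ids it relied on.

```python
# Variant of the earlier export: keep the post Id alongside the text,
# so the model's answer can cite posts we can look up at the source.
# Reuses the df (with "Id", "text", "modularity_class") built above.
for class_id, group in df.groupby("modularity_class"):
    group[["Id", "text"]].to_csv(f"community_{class_id}_posts.csv", index=False)

PROMPT = (
    "Summarize these posts in 5 points maximum. After each point, "
    "cite the Id of 2 or 3 posts that support it."
)
```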
It doesn’t replace the human eye, but it provides a truly unique perspective, zooming in multiple times on multiple communities and topics, and processing thousands of posts at each step.
Limitations
This whole technique is essentially a hack of Gephi. We could have done the entire process with just 10 lines of code: automatically clustering, exporting, and even calling ChatGPT.
If you plan to do this kind of analysis on a daily basis, I strongly recommend having someone on your team learn basic Python to automate the workflow without relying on Gephi. If you’re interested, I’d be happy to write a more "dev-oriented" blog post to show how easily this can be done with code (just let me know in the LinkedIn comments!).
And remember: this is still just an AI. It can lie. It can hallucinate. It can be biased. But the simpler the task you give it, the less likely it is to fail.
Summarizing a set of tweets that are likely about the same topic (because they belong to the same community) is much easier for an AI than trying to summarize everything the entire network is saying at once.
Finally, this method makes it much easier for you to verify its answers:
If, in my example, the AI mentions Manon Aubry, I can simply search for "Manon Aubry" in my tweets on Visibrain to check whether that conversation is really happening.
Verification becomes much easier when the AI’s responses are precise, and since you can do this community by community, you can build confidence in the results step by step.
At Agoratlas, the final results never come from an LLM alone. But it is part of our toolkit, and it can be pretty useful when backed up by other tools pointing in the same direction.
And if you’re particularly interested in the French September 10 demonstrations, you can find our study here.