InQ Dashboard Development
Standards Definitions
At the core of TIM’s media analysis system are the standards created by a Standards Working Group, consisting of experts from journalism, academia, media ethics, data science, and law. These professionals collaborate with leading organizations focused on information quality, AI ethics, and media transparency to define standards that ensure media content is evaluated with the highest level of accuracy and trust.
These standards are developed through careful deliberation and reflect best practices in media transparency and information quality.
Transformation of Standards into Classifiers
Once these qualitative standards are defined, they are translated into classifiers, which are machine-readable attributes that can be analyzed by AI models. These classifiers capture the essence of each standard in a way that allows the AI to assess media content. Here’s how each standard is transformed:
Factual Fidelity Classifier: Identifies and cross-checks factual claims against verified databases and known sources to ensure accuracy.
Provenance Classifier: Evaluates the credibility and transparency of the sources used in the article.
Consistency Classifier: Ensures the content is free of internal contradictions and aligns with other trusted reports.
Timeliness Classifier: Assesses whether the article references recent events and includes up-to-date information.
Sentiment Neutrality Classifier: Detects emotionally charged or biased language, ensuring an objective tone.
Completeness Classifier: Evaluates whether the article thoroughly covers the topic and provides necessary context.
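As an illustrative sketch only, the six standards above could be expressed as machine-readable classifier instructions. The dictionary keys and prompt wording below are assumptions for illustration, not TIM's actual definitions:

```python
# Hypothetical sketch: the six standards expressed as machine-readable
# classifier instructions that can be handed to an AI model.
# Names and wording are illustrative, not TIM's actual definitions.
CLASSIFIERS = {
    "factual_fidelity": (
        "Identify each factual claim and state whether it can be "
        "cross-checked against verified databases and known sources."
    ),
    "provenance": (
        "Evaluate the credibility and transparency of the sources "
        "used in the article."
    ),
    "consistency": (
        "Flag internal contradictions and conflicts with other "
        "trusted reports."
    ),
    "timeliness": (
        "Assess whether the article references recent events and "
        "includes up-to-date information."
    ),
    "sentiment_neutrality": (
        "Detect emotionally charged or biased language that would "
        "undermine an objective tone."
    ),
    "completeness": (
        "Judge whether the article thoroughly covers the topic and "
        "provides necessary context."
    ),
}
```

Keeping the instructions as plain data like this keeps the standards auditable and editable without code changes.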
Leveraging OpenAI’s GPT Model for Classification
After the classifiers are defined, TIM’s platform utilizes OpenAI’s GPT model to apply these classifiers in real-time during media analysis:
Natural Language Understanding: GPT reads and comprehends the article’s content, applying the classifiers to evaluate aspects such as factual accuracy, tone, and completeness.
Classifiers in Action: GPT processes the text, identifying key elements related to each classifier, and generates insights about the article's trustworthiness, objectivity, and overall quality.
For example, the Factual Fidelity classifier might prompt GPT to cross-check statements against databases to verify the accuracy of claims. Similarly, the Sentiment Neutrality classifier would analyze the article for emotionally charged language, ensuring that the tone remains objective.
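A minimal sketch of how an article and one classifier's instruction might be combined into a chat prompt. The function name and prompt wording are assumptions for illustration:

```python
# Hypothetical sketch of assembling a single-classifier analysis prompt.
# The function name and prompt wording are illustrative assumptions.
def build_messages(article_text: str, classifier_name: str, instruction: str) -> list[dict]:
    """Assemble chat messages asking GPT to apply one classifier to an article."""
    system = (
        "You are a media-quality analyst. Apply the following standard "
        f"({classifier_name}) and return a descriptive assessment, "
        "not a numerical score."
    )
    return [
        {"role": "system", "content": system},
        {
            "role": "user",
            "content": f"Standard: {instruction}\n\nArticle:\n{article_text}",
        },
    ]
```

With the official OpenAI Python client, a messages list like this would then be passed to `client.chat.completions.create(model=..., messages=...)`.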
Formatting and API Interaction
Once the article’s data is prepared, TIM’s platform sends it to GPT through an API call. The GPT model applies the classifiers and returns detailed analysis in a structured format:
API Call and Processing: The text is analyzed, and GPT returns descriptive feedback on how well the article aligns with the established standards.
Real-Time Feedback: GPT provides real-time analysis based on the classifiers, offering a comprehensive view of how the article performs against each element of trustworthiness without assigning any numerical scores.
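A sketch of what handling the returned structured analysis could look like. The JSON shape and field names shown here are assumptions, not TIM's actual response schema:

```python
import json

# Hypothetical example of the structured, score-free analysis GPT might
# return when asked for JSON output. The field names are assumptions.
sample_response = json.dumps({
    "factual_fidelity": "All three statistical claims match their cited sources.",
    "sentiment_neutrality": "Tone is largely neutral; one loaded adjective in paragraph 4.",
})

def parse_analysis(raw: str) -> dict[str, str]:
    """Turn the model's JSON reply into a classifier -> descriptive-feedback map."""
    parsed = json.loads(raw)
    # Keep only string-valued descriptive fields; ignore anything else.
    return {k: v for k, v in parsed.items() if isinstance(v, str)}
```

Note that every value stays descriptive text, consistent with the platform's no-numerical-scores approach.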
Processing and Displaying Results
After GPT completes the analysis, TIM’s platform processes the results for visualization:
Result Storage: The descriptive analysis, such as Factual Fidelity or Provenance insights, is stored along with the original article data.
Dashboard Visualization: The analysis is presented in an intuitive format that focuses on descriptive insights rather than numerical scores, ensuring that users can easily understand the article's strengths and weaknesses against the underlying standards.
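The storage-and-display step can be sketched as a simple record type that keeps the analysis next to its article and renders score-free text for the dashboard. The class and field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of storing an analysis alongside its article data
# and rendering it for the dashboard. Names are illustrative assumptions.
@dataclass
class ArticleAnalysis:
    article_id: str
    title: str
    insights: dict[str, str] = field(default_factory=dict)  # classifier -> feedback

    def render_summary(self) -> str:
        """Plain-text rendering: descriptive insights only, no numerical scores."""
        lines = [f"Analysis for: {self.title}"]
        for classifier, feedback in sorted(self.insights.items()):
            lines.append(f"  {classifier.replace('_', ' ').title()}: {feedback}")
        return "\n".join(lines)
```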
Continuous Feedback and Model Refinement
TIM’s platform includes a continuous improvement process:
User Feedback: Users can provide feedback on the AI’s analysis, helping to refine the classifiers and improve the system over time.
Model Refinement: As more articles are processed and analyzed, the classifiers are fine-tuned based on real-world feedback, enhancing the platform’s overall accuracy.
InQ Dashboard Data Flow and Processing