The Threat of Deepfakes: AI and ML in the Fight Against Synthetic Media
Deepfakes, a type of synthetic media produced using artificial intelligence, have recently gained notoriety for their strikingly realistic manipulation of audio, video, and images. Once a largely harmless form of online amusement, they are now used to deceive the public, spread disinformation, and damage reputations. Their increasing sophistication makes it ever more difficult to separate genuine content from fabrication. In today's fast-paced, social media-driven environment, identifying and preventing deepfakes is essential to preserving trust in news sources. Given their potential to disrupt political processes, unsettle financial markets, and erode public confidence, effective detection techniques are vital. Artificial intelligence (AI) and machine learning (ML) are being deployed to neutralise this threat.
AI-driven systems are steadily improving at detecting deepfakes through pattern, anomaly, and deviation analysis of audiovisual data. Using deep learning and neural networks, AI can recognise manipulated material, helping to protect digital platforms from the dangers of synthetic content. This article covers the significance of artificial intelligence and machine learning in the battle against deepfakes, as well as the continuing development of these technologies.
The Evolution of Deepfakes
Deepfakes have developed over time from simple picture editing to convincing video and audio synthesis. Although the underlying techniques trace back to early face-recognition research in the 1990s, the word "deepfake" itself did not appear until 2017. Deepfakes outgrew their initial limitation, face-swapping in still images, largely thanks to advances in machine learning, especially Generative Adversarial Networks (GANs). Introduced in 2014, a GAN pairs a generator that produces fake material with a discriminator that tries to distinguish real from fake; this competitive process continually improves the quality of the generated content. By 2018, deepfake technology had progressed to creating convincing video and audio, sparking both awe and concern.
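To make the generator-versus-discriminator dynamic concrete, here is a minimal GAN training loop in PyTorch. It is a sketch under simplifying assumptions, not a production deepfake system: the tiny fully connected networks, the 64-dimensional noise vector, and the flattened 28x28 "image" are all choices made for brevity.

```python
import torch
import torch.nn as nn

# Toy networks: a generator that maps random noise to a flattened "image",
# and a discriminator that scores how real that image looks.
latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit: higher means "looks more real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    fake_batch = generator(torch.randn(batch, latent_dim))

    # 1) Discriminator: learn to label real samples 1 and fakes 0.
    d_loss = loss_fn(discriminator(real_batch), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_batch.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: learn to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each call to train_step pits the two networks against each other: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones, which is precisely the feedback loop described above.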
Today, deepfakes can manipulate facial expressions, lip movements, and voice in real-time, blurring the line between reality and fiction. This technology has found applications in entertainment and education but also poses significant challenges. Privacy concerns arise as anyone’s likeness can be co-opted without consent. Security threats emerge from potential misuse in fraud or disinformation campaigns. Perhaps most critically, deepfakes erode trust in digital media, making it increasingly difficult to discern authentic content from fabrications.
AI and Machine Learning Approaches to Detecting Deepfakes
Detecting deepfakes requires sophisticated AI and machine learning (ML) techniques capable of recognizing subtle anomalies in synthetic content. Deepfakes, generated using techniques like Generative Adversarial Networks (GANs), often contain small discrepancies that are difficult for the human eye to detect but can be identified through AI-powered analysis. AI-based detection techniques leverage vast amounts of data and advanced algorithms to learn patterns that differentiate real media from deepfakes.
- Neural Networks: One of the primary AI techniques used is neural networks, specifically deep learning models such as Convolutional Neural Networks (CNNs). CNNs are widely employed because they can process visual information, analyze patterns, and detect inconsistencies in images and videos. By training on datasets of real and fake media, CNNs learn to spot irregularities in pixel structure or frame transitions that signal manipulation (a minimal classifier sketch follows this list). These networks can detect the subtle differences between a real human face and a digitally altered one, even when the deepfake is highly realistic.
- Facial Movement Analysis: Real faces move naturally, with specific muscle patterns and micro-expressions that deepfake algorithms struggle to replicate perfectly. AI models analyse these facial dynamics, tracking the synchronization between lip movement and audio, eye blinking rates, or slight shifts in facial muscles to detect discrepancies in manipulated videos.
- Audio Inconsistencies: Deepfake videos often feature mismatches between the audio track and the visual content. For instance, AI can pick up on unnatural speech patterns, irregularities in sound frequency, or mismatches between mouth movements and spoken words. These abnormalities can be flagged by AI systems designed to match audio to visuals.
- Pixel-level Analysis: This is another effective AI-driven approach. Deepfake generators, while sophisticated, tend to leave pixel anomalies, particularly at the boundary of manipulated regions (e.g., around the eyes, mouth, or skin texture). AI can detect these pixel-level irregularities, which are often too subtle for human viewers to notice but are indicative of digital tampering.
- AI-Powered Spatial and Temporal Analysis: This approach scrutinizes video frames for inconsistencies across time. While an individual frame may appear realistic, examining the sequence of frames often reveals irregularities in motion or lighting that deepfakes fail to maintain consistently. These subtle distortions can signal that a video has been digitally altered.
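As referenced above, the sketch below shows what a CNN-based frame classifier might look like in PyTorch. It is illustrative only: the layer sizes, the 128x128 input resolution, and the single real/fake output are assumptions, and a practical detector would be trained on a large labelled corpus such as FaceForensics++.

```python
import torch
import torch.nn as nn

class DeepfakeFrameClassifier(nn.Module):
    """Binary real/fake classifier for a single 128x128 RGB video frame."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 1),  # logit: above 0 suggests "fake"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DeepfakeFrameClassifier()
frame = torch.randn(1, 3, 128, 128)      # stand-in for one preprocessed frame
print(f"P(fake) = {torch.sigmoid(model(frame)).item():.3f}")
```

In practice, a per-frame score like this would be averaged over a whole clip, since the temporal inconsistencies noted above are often more telling than any single frame.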
The Arms Race: Deepfake Creation vs. Detection
The battle between deepfake creators and detectors is a continuous arms race. As AI and machine learning tools improve at detecting synthetic media, deepfake creators respond by refining their techniques, making detection increasingly difficult. This cycle of advancement is driven by the dual capabilities of AI, which plays a key role in both the creation and detection of deepfakes.
On one side, deepfake creators leverage technologies like Generative Adversarial Networks (GANs) to produce increasingly sophisticated synthetic media. GANs consist of two neural networks: the generator, which creates fake content, and the discriminator, which attempts to identify real versus fake content. As the discriminator improves, the generator learns to produce even more realistic deepfakes, creating an ever-evolving challenge for detection systems. This feedback loop enables deepfake creators to generate media that are harder to distinguish from authentic content.

On the other side, advancements in AI detection technologies prompt creators to innovate further. For example, when facial movement analysis became a popular detection method, deepfake algorithms improved their ability to replicate natural facial dynamics. Similarly, pixel-level analysis of deepfakes spurred creators to enhance image resolution and reduce detectable inconsistencies. As detection techniques evolve, so do the methods of countering them, resulting in a constant tug-of-war.
AI itself is central to both sides of this arms race. The same machine learning models that power detection systems also underpin deepfake generation tools. This dual role of AI presents a unique challenge—while it helps in defending against synthetic media, it also serves as the foundation for producing increasingly convincing deepfakes. The result is a perpetual cycle of creation and detection, where advances on one side directly fuel innovation on the other. This arms race continues to shape the future of media integrity and security.
Key Technologies in Deepfake Detection
The rapid advancement of deepfakes has prompted the development of sophisticated AI and machine learning technologies to detect synthetic content. These technologies harness the power of neural networks, deep learning models, and advanced forensics techniques to identify even the most subtle manipulations. Here’s a detailed look at some of the top technologies used in deepfake detection.
- Convolutional Neural Networks (CNNs): CNNs are among the most widely used AI tools in deepfake detection. They excel at processing and analyzing visual data, making them ideal for detecting image and video anomalies. By breaking visual content down into smaller pixel-level units, CNNs can identify inconsistencies that are typically invisible to the human eye. For example, they can detect subtle differences in skin texture, lighting, and facial expressions across frames. Trained on massive datasets of real and fake media, CNNs learn to spot even the slightest signs of tampering. Their ability to handle complex visual data makes them central to the detection of deepfake videos.
- GANs for Detecting Deepfakes: Generative Adversarial Networks (GANs), the very technology used to create deepfakes, are also employed in detecting them. GANs consist of two neural networks: a generator that creates fake content and a discriminator that tries to distinguish between real and fake. In detection, GANs are used to reverse-engineer deepfake generation processes by analysing and comparing real content with synthetically produced media. Detection-focused GANs excel at identifying unusual artifacts in deepfake videos, such as inconsistencies in lighting, facial alignment, or audio mismatches.
- Audio-Visual Forensics: Audio-visual forensics integrates AI-driven techniques to analyse both the video and audio components of media. Deepfakes often struggle to perfectly sync voice and facial movements, creating detectable discrepancies. By analysing the synchronization between lip movement and speech, AI algorithms can detect subtle differences that suggest manipulation (a simple sketch of this idea follows this list). Additionally, deepfakes tend to introduce audio artifacts, such as unnatural pauses or pitch irregularities, which can be flagged by forensic tools. This method is especially useful for catching deepfake videos where the speaker's words and facial movements don't align naturally.
- Real-World Applications: AI-based deepfake detection technologies have found crucial applications in various fields. In media, news organizations are employing these tools to verify the authenticity of video content before broadcasting. Security agencies use AI to detect deepfakes in surveillance footage or to prevent the spread of disinformation during elections. In law enforcement, deepfake detection helps combat criminal activities like fraud or impersonation by identifying doctored evidence. Social media platforms are increasingly deploying AI-powered detection tools to remove manipulated content, safeguarding user trust.
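One simple way to operationalise the lip-sync check mentioned above is sketched below: compare the audio's loudness envelope against frame-to-frame motion in the lower half of the video, a rough stand-in for the mouth region. The separate audio file, the whole-frame differencing instead of a proper face and mouth detector, and the purely illustrative threshold are all assumptions; real forensic tools use dedicated audio-visual models.

```python
import cv2           # pip install opencv-python
import librosa       # pip install librosa
import numpy as np

def av_sync_score(video_path: str, audio_path: str) -> float:
    """Correlate audio loudness with motion in the lower half of each frame."""
    # Audio: RMS loudness per short window.
    y, _ = librosa.load(audio_path, sr=16000)
    rms = librosa.feature.rms(y=y, frame_length=2048, hop_length=512)[0]

    # Video: mean absolute pixel change in the lower half of each frame,
    # used here as a crude proxy for mouth movement.
    cap = cv2.VideoCapture(video_path)
    motion, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
        lower = gray[gray.shape[0] // 2:, :]
        if prev is not None:
            motion.append(np.abs(lower - prev).mean())
        prev = lower
    cap.release()

    # Resample both signals to a common length and correlate them.
    n = min(len(rms), len(motion))
    grid = np.linspace(0, 1, n)
    rms_r = np.interp(grid, np.linspace(0, 1, len(rms)), rms)
    mot_r = np.interp(grid, np.linspace(0, 1, len(motion)), motion)
    return float(np.corrcoef(rms_r, mot_r)[0, 1])

# A low score can be one weak signal (among many) of a dubbed or manipulated
# clip; any threshold, such as 0.2, would be purely illustrative.
```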
Challenges in Deepfake Detection
Despite significant advancements, deepfake detection technologies face several challenges that limit their effectiveness. As deepfake creation methods become more sophisticated, the limitations of current AI and machine learning models are increasingly exposed.
- Limitations of AI Models: One major challenge is the inability of AI models to keep pace with the rapid evolution of deepfake techniques. Deepfake generation tools, especially those based on Generative Adversarial Networks (GANs), are constantly improving, making it harder for existing detection models to identify fakes. Additionally, detection tools often rely on massive datasets for training, and deepfake creators can exploit unseen techniques that the AI hasn’t been trained to detect. This means that newer, more advanced deepfakes may bypass even the most advanced detection algorithms.
- Ethical Concerns and Bias: AI detection systems are not immune to biases. Detection algorithms may perform unevenly across different demographics, such as race, gender, or age, leading to false positives or negatives. For instance, facial recognition and detection models have historically struggled with people of colour due to unbalanced training data, which raises concerns about fairness and inclusivity. Ethical questions also arise when it comes to privacy, as detecting deepfakes may require intrusive data collection, such as facial scans or personal audio recordings, which could infringe on individual rights.
- Accessibility and Open-Source Tools: Many advanced deepfake detection tools are developed by large corporations or government agencies, limiting public access. The lack of open-source detection software means that smaller organizations, independent media outlets, and the general public have fewer resources to detect deepfakes. This disparity in access puts underfunded groups at a disadvantage when combating misinformation. The need for more accessible and open-source tools is crucial in ensuring that everyone can participate in the fight against deepfakes and safeguard the integrity of information.
Future Trends: What Lies Ahead?
The future of deepfake detection is set to be shaped by emerging technologies and innovative approaches aimed at staying ahead of increasingly sophisticated synthetic media. As deepfakes evolve, more accurate and reliable detection tools are needed to safeguard the integrity of digital content. Here are some key trends that are likely to shape the future of deepfake detection.
- Advanced AI Tools: One of the most promising trends is the development of more sophisticated AI tools, such as self-supervised learning and transformer models. Unlike traditional supervised models that require massive labelled datasets, self-supervised models can learn useful representations from unlabelled data, making them more adaptable to new and evolving deepfake techniques. Transformer models, which have revolutionized natural language processing, are being adapted to analyse and cross-verify both visual and audio data, improving detection accuracy. These advanced tools will enhance AI's ability to identify subtle anomalies in deepfakes.
- Blockchain for Decentralized Verification: Blockchain technology offers a novel solution to the deepfake problem through decentralized media verification. By creating immutable records of media content at the point of creation, blockchain can verify the authenticity of images, videos, and audio files as they circulate online. Any alteration to the original content can be detected through the blockchain ledger, ensuring transparency and accountability (a toy illustration follows this list). This decentralized approach empowers content creators and consumers to verify the integrity of digital media without relying on centralized platforms.
- AI-Based Content Verification: The future will likely see the integration of AI-based content verification systems across social media platforms, news organizations, and security agencies. These systems could operate in real-time, flagging potential deepfakes as they are uploaded or shared. Combined with technologies like digital watermarking, which embeds hidden, tamper-proof identifiers in media, AI-based systems will offer an automated, scalable solution to deepfake detection.
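A minimal sketch of the blockchain idea referenced above, assuming only a toy append-only ledger rather than any particular chain: each piece of media is fingerprinted with a cryptographic hash at creation time, and later copies are checked against that record. The class and field names here are hypothetical.

```python
import hashlib
import json
import time

class MediaLedger:
    """Toy append-only ledger: each entry commits to the previous one."""

    def __init__(self) -> None:
        self.chain: list[dict] = []

    def register(self, media_bytes: bytes, creator: str) -> str:
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        prev = self.chain[-1]["entry_hash"] if self.chain else "0" * 64
        entry = {"media_hash": media_hash, "creator": creator,
                 "timestamp": time.time(), "prev_hash": prev}
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)
        return media_hash

    def verify(self, media_bytes: bytes) -> bool:
        """True only if this exact content was registered earlier."""
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        return any(e["media_hash"] == media_hash for e in self.chain)

ledger = MediaLedger()
original = b"...raw video bytes..."
ledger.register(original, creator="newsroom@example.org")
print(ledger.verify(original))                # True: untouched
print(ledger.verify(original + b"tampered"))  # False: any edit changes the hash
```

Note the trade-off this exposes: a single changed byte breaks verification, which is exactly the tamper evidence wanted, but legitimate re-encodes would also fail, so practical systems pair exact hashes with perceptual hashing.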
Conclusion
The ongoing battle against deepfakes highlights the crucial role that AI and machine learning play in preserving the integrity of digital media. Through advanced techniques such as CNNs, GANs, and audio-visual forensics, these technologies enable the detection of subtle manipulations in synthetic media, helping to safeguard trust in what we consume online. However, the continuous arms race between deepfake creators and detectors underscores the need for ongoing innovation. The continued development of AI-driven detection tools is vital to staying ahead of increasingly sophisticated deepfakes. As the technology evolves, so too must our defences. Ensuring the authenticity of digital content is not just a technical challenge but a societal imperative to protect individuals, institutions, and the broader public from the harmful impacts of misinformation and deception in an ever-expanding digital landscape.
Author: Ahmed Olabisi Olajide (Co-founder, Eybrids)
LinkedIn: Olabisi Olajide
Combating Impersonation Attacks in the Digital Age: Protecting Consumers and Businesses
By Abuh Ibrahim Sani
As business transactions continue to revolve around technology, trust holds immense value. Consumers trust the brands and organizations they engage with to protect their personal and financial records. Cybercriminals, however, have become adept at exploiting this trust through impersonation attacks. These advanced strategies, which include imitating the online personas of respected brands or persons, present significant dangers for businesses and consumers alike. This article examines how cybercriminals take advantage of public trust, the characteristics of impersonation attacks, and methods of protecting against them.
What Are Impersonation Attacks?
Impersonation attacks, also referred to as brand impersonation or spoofing, happen when cybercriminals pretend to be trusted entities to trick individuals into sharing sensitive information or engaging in harmful activities. These attacks often arrive as phishing emails, counterfeit websites, or deceptive social media profiles, all created to appear genuine. The main objective is to deceive unsuspecting individuals into revealing sensitive information such as passwords or payment details, or to carry out unauthorized money transfers.
The effectiveness of these attacks depends on the significant amount of trust individuals have in reputable organizations. Consumers are more inclined to engage with fraudulent communication without questioning its authenticity when they see logos, email addresses, or branding elements they recognize. This renders impersonation attacks highly perilous and successful.
How Cybercriminals Exploit Public Trust
Cybercriminals take advantage of people's trust in various ways, using a mix of psychological manipulation and technological deceit. Phishing is a widely used type of impersonation attack in which fraudsters send emails that appear to come from reputable businesses. These emails are created to mimic official communications from banks, e-commerce sites, or other trusted organizations. They frequently include urgent wording, such as alerts about account problems or notices of suspicious activity, spurring recipients to act quickly. The email may instruct the victim to click a link that leads to a fraudulent website requesting sensitive information.
Cybercriminals often make websites that look very similar to the official websites of popular brands. These websites might employ domain names that are very similar to the authentic ones, typically with slight differences such as additional letters or subtle misspellings (e.g., “amaz0n.com” instead of “amazon.com”). These fraudulent websites are frequently utilized to gather login details, pilfer credit card data, or disseminate malware to the victim’s device.
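A rough illustration of how a brand might screen newly registered domains for such lookalikes, assuming a plain edit-distance check; the domain list and the distance threshold below are invented for the example, and production monitoring services add far richer signals (homoglyph tables, WHOIS data, certificate transparency logs).

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

BRAND = "amazon.com"
candidates = ["amaz0n.com", "arnazon.com", "amazon-support.net", "example.org"]

for domain in candidates:
    distance = levenshtein(BRAND, domain)
    if 0 < distance <= 2:  # very close, but not identical: likely typosquat
        print(f"suspicious lookalike: {domain} (distance {distance})")
```

A pure distance check misses combosquats like "amazon-support.net", which is why real monitoring services also inspect substrings, keywords, and homoglyphs.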
With the growing dependence of businesses on social media for engaging with customers, cybercriminals are now using these platforms to execute impersonation attacks. Scammers make fraudulent accounts that imitate the branding and messaging of reputable companies. These accounts could be utilized to share harmful links, advertise fraudulent contests, or request personal details from their followers. Since many users rely on brands’ verified accounts on social media, they might not realize when they are interacting with a fake profile.
Impact of Impersonation Attacks on Brands and Consumers
Impersonation attacks can cause severe harm to businesses as well as their customers. For businesses, these attacks damage consumer trust, harm their reputation, and could lead to legal consequences. If a company’s name is linked to deceitful actions, customers might view it as a betrayal, resulting in decreased profits and lasting harm to the brand.
Consumers who fall prey to impersonation attacks may suffer financial losses, identity theft, and compromised personal data. In many cases, the emotional repercussions of being deceived are just as damaging as the financial ones.
Protecting Against Impersonation Attacks
Protecting against impersonation attacks requires a combination of technical defenses and public awareness. Companies need to be proactive in protecting their brand, and consumers must be taught how to identify and avoid scams. Organizations should adopt email authentication protocols such as DMARC, SPF, and DKIM. These technologies help confirm that emails claiming to originate from a company's domain are authentic, decreasing the chances of phishing emails reaching customers' inboxes.
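For illustration, the sketch below uses the dnspython library (an assumed dependency: `pip install dnspython`) to check whether a domain publishes SPF and DMARC policies, which live in DNS TXT records of the form `v=spf1 include:_spf.example.com -all` and `v=DMARC1; p=quarantine; ...`. It only checks that the records exist; actual enforcement happens on receiving mail servers.

```python
import dns.resolver  # pip install dnspython

def get_txt(name: str) -> list[str]:
    """Return all TXT strings published at a DNS name, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"
spf = [r for r in get_txt(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```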
Businesses should also continuously monitor the internet for fraudulent domains or websites imitating their brand. This can be achieved with domain-monitoring services that track variations of the company's name and flag possible risks. Upon discovering counterfeit websites, businesses should promptly pursue legal action to have them taken down. They should likewise watch social media platforms closely for fake accounts using their name or image; reporting such accounts to the platform quickly is crucial to prevent them from being used in impersonation attacks.
Furthermore, businesses can request verified status on their social media accounts, making it easier for users to recognize legitimate profiles. Phishing awareness training can help employees identify fraudulent emails and avoid being deceived by them. In the same way, businesses can use public communication channels to educate customers about typical scams, for example by recommending that they double-check URLs or contact the company directly when they suspect a communication may be fraudulent.
Responding to an Impersonation Incident
In case of an impersonation attack, it is crucial to have a clearly outlined incident response plan. This plan needs to involve informing impacted customers, offering advice on safeguarding their accounts and collaborating with cybersecurity experts to manage the breach. Being transparent and communicating quickly can reduce harm to the company’s reputation and rebuild public confidence.
Conclusion
Impersonation attacks endanger both brands and their customers, exploiting trust to deceive and defraud unsuspecting victims. Nevertheless, by deploying strong security protocols and promoting awareness, companies can shield themselves from these tactics and safeguard their brand's reputation. Public trust is a valuable asset, and protecting it in the digital era demands vigilance, education, and technological resilience.
Understanding the Role of Psychological Warfare in Cybersecurity: Protecting Against Ransomware Attacks
By Abuh Ibrahim Sani
Amidst a technological age teeming with chatter about system loopholes and digital breaches, psychological warfare is emerging as a potent weapon. Cybercriminals, especially those orchestrating ransomware offensives, deftly wield tactics that rattle electronic frameworks and manipulate the human mind. This piece delves into the dynamics of psychological warfare in the digital security realm, spotlighting ransomware onslaughts, while also offering strategies to counteract these maneuvers.
The Emergence of Ransomware
Ransomware, a form of malevolent software, seizes files or paralyzes a computer until a payment is rendered. It has evolved from crude blackmail into a complex criminal enterprise. The emergence of Ransomware-as-a-Service (RaaS) has lowered the entry bar, allowing even tech novices to initiate assaults, while cryptocurrencies provide a veil for untraceable payments. Yet, as these schemes grow technically intricate, they increasingly employ psychological manipulation to intimidate victims into handing over ransoms.
Psychological Manipulation in Ransomware Attacks
Attackers who exploit human psychology aim to instil panic, bewilderment, urgency, and even guilt in their victims to pressure them into paying a ransom. Some of the main psychological strategies used are:
- Fear and Intimidation
Attackers use the victim’s fear of losing crucial data or damaging their reputation. After encrypting the files, they frequently display intimidating messages emphasizing the serious consequences of noncompliance, such as permanent data loss or the public disclosure of sensitive information. The countdown timers associated with ransom demands amplify this fear by instilling a sense of approaching doom.
- Urgency and Time Pressure
Ransomware attackers often use a ticking clock to drive victims to make fast decisions. Setting a short payment deadline creates a sense of urgency that overrides sound thinking. Victims are placed in uncomfortable situations in which they must decide whether to pay or risk losing their data permanently. The limited decision window is intended to prevent the victim from exploring alternatives, such as consulting experts or restoring from backups.
- Exploitation of Responsibility and Guilt
Cybercriminals may tailor their communication to exploit feelings of individual accountability, especially in commercial settings. They may imply that failing to respond quickly will harm the firm, cause financial losses, or hurt customers, making the individual feel responsible for any negative outcomes.
- Disruption and Chaos
The initial impact of a ransomware assault is often designed to confuse. Systems fail, files become inaccessible, and routine business operations cease. The abrupt disruption of normal activities produces confusion and alarm, allowing attackers to manipulate emotions and pressure victims into complying with ransom demands.
- The Promise of Restoration
Reverting to a state of normalcy holds a vast allure for the psyche. Assailants assume the guise of saviours, claiming exclusive prowess to revert the afflicted device to its unbreached condition. They hijack the victim’s data, pledging its liberation post-payment. The urge for straightforward fixes can prompt individuals to consent blindly, oblivious to the broader perils.
Defend Against Ransomware’s Psychological Attacks
Safeguarding against ransomware demands more than firewalls, antivirus programs, and data encryption. It’s imperative to grasp the mental strategies employed and emphasize fortifying your resilience against them. Here are key methods to thwart ransomware assaults that exploit psychological manipulation:
- Preparedness and Incident Response Plans
Reducing the psychological impact of ransomware requires the development and execution of a comprehensive incident response strategy. Establishing clear protocols for handling cyberattacks helps reduce disarray and anxiety among affected parties. In a crisis, a well-organized plan helps decision-makers and staff remain composed and make better choices.
- Deliberate Communication
Coordinated, calm, and transparent internal and external communication is crucial during a ransomware attack. Establishing crisis communication procedures in advance will help stakeholders, employees, and clients stay informed and calm. Open and prompt communication counteracts the attackers’ use of haste and terror.
- Establishing Time Delays and Decision Protocols
To thwart the "urgency" tactic, organizations might impose rules mandating multiple levels of scrutiny before approving any payment. Companies should avoid acting out of fear by imposing time delays or requiring consultation with legal, financial, and security experts. This acts as a significant deterrent to the psychological pressure that attackers generate.
- Cybersecurity and Law Enforcement Professionals
Ransomware perpetrators often manipulate victims into thinking they must handle the problem on their own. In reality, involving cybersecurity experts and law enforcement can be highly advantageous. In addition to limiting the damage, experts can help restore affected equipment and offer alternatives to paying the ransom.
- Backup and Recovery Systems
Removing the ransomware perpetrators' power over victims is one of the best ways to undermine their psychological strategies. The dread of permanent data loss can be significantly diminished by consistently backing up crucial information and keeping the copies in a secure, offline location (a small verification sketch follows this list). The mental edge possessed by cyber intruders vanishes when ransomware strikes, because those affected can restore their systems from these backups and sidestep any ransom payment.
- Training and Awareness
Employees receiving frequent cybersecurity education are less prone to fall for mind games. Staff should be instructed on both ransomware threats and the emotional tactics used by cyber villains. Those who grasp the psychological facets of an assault will keep calm and avoid hasty choices when pressured.
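As a small technical complement to the backup advice above, the sketch below shows one way to verify that an offline backup has not silently changed: record a SHA-256 hash for every file when the backup is made, then re-check the hashes before restoring. The paths and manifest format are illustrative assumptions.

```python
import hashlib
import json
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(backup_dir: str, manifest_path: str) -> None:
    """Run when the backup is created: record a hash per file."""
    root = Path(backup_dir)
    manifest = {str(p.relative_to(root)): file_sha256(p)
                for p in root.rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_backup(backup_dir: str, manifest_path: str) -> list[str]:
    """Run before restoring: return files that are missing or altered."""
    root = Path(backup_dir)
    manifest = json.loads(Path(manifest_path).read_text())
    return [rel for rel, expected in manifest.items()
            if not (root / rel).is_file() or file_sha256(root / rel) != expected]

# Example usage (paths are hypothetical):
# build_manifest("/mnt/offline-backup", "backup-manifest.json")
# print(verify_backup("/mnt/offline-backup", "backup-manifest.json") or "backup intact")
```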
Developing Mental Hardiness
In the fight against ransomware, psychological toughness is just as important as technical resistance. Companies must build a resilient culture by prioritizing preparedness, clarity of thought, and teamwork. Businesses that prioritize mental toughness and organized emergency-response strategies are better equipped to thwart ransomware attackers' emotional manipulation.
Conclusion
The arena of cybersecurity is not solely anchored in tech wizardry; it’s a realm where psychology plays a pivotal role. Those wielding ransomware have honed their craft, preying on human anxieties, urgency, and bewilderment to coerce compliance. To outsmart these threats, one requires a mix of tech acumen and psychological resilience. Establishing incident response strategies, ongoing education, and robust backup protocols empowers organizations to counteract the mental ploys of hackers and shield their precious data. Cybersecurity extends beyond tech defense; it’s about safeguarding individuals against cunning manipulation.
NITDA urges users of LiteSpeed Cache plugin for WordPress to update
The National Information Technology Development Agency (NITDA) has called on users of the LiteSpeed Cache plugin for WordPress to update to the latest version (6.4.1) to prevent their websites from being attacked.
Mrs Hadiza Umar, Director, Corporate Affairs and External Relations at NITDA, said this in a statement in Abuja on Monday.
LiteSpeed Cache for WordPress (LSCWP) is an all-in-one site acceleration plugin, featuring an exclusive server-level cache and a collection of optimisation features.
Umar said that a critical security vulnerability (CVE-2024-28000) had been discovered in the LSCWP, affecting over five million websites.
“This vulnerability allows attackers to take complete control of a website without requiring any authentication.
“The vulnerability is due to a flaw in the plugin’s role simulation feature and if exploited, an attacker can manipulate this flaw to gain administrative access to the website.
“This could lead to the installation of malicious plugins, theft of data, or even redirection of site visitors to harmful websites.
“Website administrators using the LiteSpeed Cache plugin are strongly advised to update to the latest version (6.4.1) immediately,” she said.
She noted that the simplicity of the attack vector, combined with a weak hash function, made it easy for attackers to exploit this vulnerability through brute-force guessing or by exploiting exposed debug logs.
According to her, to check for updates, log in to your WordPress dashboard and navigate to the Plugins section, where you can update the LiteSpeed Cache plugin.
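For administrators who manage sites from the command line, the same check and update can be done with WP-CLI, assuming it is installed on the server (the commands below are standard WP-CLI usage, not part of NITDA's statement):

```
# Show the currently installed version of the plugin
wp plugin get litespeed-cache --field=version

# Update the plugin to the latest release
wp plugin update litespeed-cache
```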
“As a precautionary measure, administrators should ensure that debugging is disabled on live websites and regularly audit their plugin settings to prevent vulnerabilities from being exploited,” Umar said.