Technology
The Threat of Deepfakes: AI and ML in the Fight Against Synthetic Media
Deepfakes, a type of synthetic media produced using artificial intelligence, have lately become well known for their startling realism in manipulated audio, video, and images. Once a benign form of online amusement, deepfakes are now used to deceive the public, spread false information, and damage reputations, making them a major cause for concern. Their increasing sophistication makes it ever harder to separate real content from fake. In today’s fast-paced, social media-driven environment, identifying and preventing deepfakes is essential to maintaining public trust in news sources. Given their potential to disrupt political processes, affect financial markets, and erode public confidence, effective detection techniques are important. Artificial intelligence (AI) and machine learning (ML) are being used to neutralise this threat.
AI-driven systems are becoming increasingly capable of detecting deepfakes by analysing patterns, anomalies, and deviations in audiovisual data. Using deep learning and neural networks, AI can recognise manipulated material, helping to protect digital platforms from the dangers of synthetic content. This article covers the significance of artificial intelligence and machine learning in the battle against deepfakes, as well as the continuing development of these technologies.
The Evolution of Deepfakes
Deepfakes have developed over time from simple picture editing to sophisticated video and audio synthesis. Though the underlying techniques trace back to early face-recognition research in the 1990s, the word “deepfake” itself did not appear until 2017. Deepfakes overcame their initial limitation, face-swapping in still images, largely thanks to advances in machine learning, especially Generative Adversarial Networks (GANs). Introduced in 2014, GANs pit a generator, which produces fake material, against a discriminator, which tries to distinguish real from fake. This competitive process continually improves the quality of the generated content. By 2018, deepfake technology had progressed to creating convincing video and audio, sparking both awe and concern.
Today, deepfakes can manipulate facial expressions, lip movements, and voice in real-time, blurring the line between reality and fiction. This technology has found applications in entertainment and education but also poses significant challenges. Privacy concerns arise as anyone’s likeness can be co-opted without consent. Security threats emerge from potential misuse in fraud or disinformation campaigns. Perhaps most critically, deepfakes erode trust in digital media, making it increasingly difficult to discern authentic content from fabrications.
AI and Machine Learning Approaches to Detecting Deepfakes
Detecting deepfakes requires sophisticated AI and machine learning (ML) techniques capable of recognizing subtle anomalies in synthetic content. Deepfakes, generated using techniques like Generative Adversarial Networks (GANs), often contain small discrepancies that are difficult for the human eye to detect but can be identified through AI-powered analysis. AI-based detection techniques leverage vast amounts of data and advanced algorithms to learn patterns that differentiate real media from deepfakes.
- Neural Networks: One of the primary AI techniques used is neural networks, specifically deep learning models like Convolutional Neural Networks (CNNs). CNNs are widely employed because they can process visual information, analyze patterns, and detect inconsistencies in images and videos. By training on real and fake datasets, CNNs learn to spot irregularities in the pixel structure or frame transitions that signal manipulation. These networks are capable of detecting the subtle differences between a real human face and a digitally altered one, even if the deepfake is highly realistic.
- Facial Movement Analysis: Real faces move naturally, with specific muscle patterns and micro-expressions that deepfake algorithms struggle to replicate perfectly. AI models analyse these facial dynamics, tracking the synchronization between lip movement and audio, eye blinking rates, or slight shifts in facial muscles to detect discrepancies in manipulated videos.
- Audio Inconsistencies: Deepfake videos often feature mismatches between the audio track and the visual content. For instance, AI can pick up on unnatural speech patterns, irregularities in sound frequency, or mismatches between mouth movements and spoken words. These abnormalities can be flagged by AI systems designed to match audio to visuals.
- Pixel-level Analysis: This is another effective AI-driven approach. Deepfake generators, while sophisticated, tend to leave pixel anomalies, particularly at the boundary of manipulated regions (e.g., around the eyes, mouth, or skin texture). AI can detect these pixel-level irregularities, which are often too subtle for human viewers to notice but are indicative of digital tampering.
- AI-Powered Spatial and Temporal Analysis: This can scrutinize video frames for inconsistencies across time. While an individual frame may appear realistic, examining the sequence of frames often reveals inconsistencies in motion or lighting, which deepfakes fail to maintain consistently. These subtle distortions can signal that a video has been digitally altered.
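The blink-rate and temporal-consistency cues described above can be sketched in a few lines of pure Python. This is an illustrative heuristic only, not a production detector; real systems use trained neural networks, and the signal names and thresholds below are hypothetical values chosen for the sketch.

```python
# Illustrative heuristics only: real detectors use trained neural networks.
# Thresholds and signal names here are hypothetical.

def blink_rate_anomalous(eye_openness, fps=30.0, closed_thresh=0.2,
                         min_blinks_per_min=8, max_blinks_per_min=30):
    """Flag a clip whose blink rate falls outside a typical human range.

    eye_openness: per-frame eye-openness values in [0, 1], as a landmark
    tracker might produce.
    """
    blinks = 0
    was_closed = False
    for v in eye_openness:
        closed = v < closed_thresh
        if closed and not was_closed:  # count each closed episode once
            blinks += 1
        was_closed = closed
    minutes = len(eye_openness) / fps / 60.0
    if minutes == 0:
        return False
    rate = blinks / minutes
    return rate < min_blinks_per_min or rate > max_blinks_per_min

def temporal_jumps(frame_brightness, max_jump=0.15):
    """Return frame indices where mean brightness changes abruptly,
    a crude stand-in for the temporal inconsistencies deepfakes leave."""
    return [i for i in range(1, len(frame_brightness))
            if abs(frame_brightness[i] - frame_brightness[i - 1]) > max_jump]
```

For example, a 60-second clip in which the eyes never close would be flagged as anomalous, since early face-swapping models famously failed to reproduce natural blinking.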
The Arms Race: Deepfake Creation vs. Detection
The battle between deepfake creators and detectors is a continuous arms race. As AI and machine learning tools improve at detecting synthetic media, deepfake creators respond by refining their techniques, making detection increasingly difficult. This cycle of advancement is driven by the dual capabilities of AI, which plays a key role in both the creation and detection of deepfakes.
On one side, deepfake creators leverage technologies like Generative Adversarial Networks (GANs) to produce increasingly sophisticated synthetic media. GANs consist of two neural networks—the generator, which creates fake content, and the discriminator, which attempts to identify real versus fake content. As the discriminator improves, the generator learns to produce even more realistic deepfakes, creating an ever-evolving challenge for detection systems. This feedback loop enables deepfake creators to generate media that are harder to distinguish from authentic content. On the other side, advancements in AI detection technologies prompt creators to innovate further. For example, when facial movement analysis became a popular detection method, deepfake algorithms improved their ability to replicate natural facial dynamics. Similarly, pixel-level analysis of deepfakes spurred creators to enhance image resolution and reduce detectable inconsistencies. As detection techniques evolve, so do the methods of countering them, resulting in a constant tug-of-war.
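The generator-discriminator feedback loop described above is usually formalized as a minimax game; in the standard GAN objective, the discriminator $D$ maximizes its ability to tell real samples $x$ from generated ones $G(z)$, while the generator $G$ minimizes it:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Each improvement in $D$ raises the bar that $G$ must clear, which is precisely the arms-race dynamic the text describes.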
AI itself is central to both sides of this arms race. The same machine learning models that power detection systems also underpin deepfake generation tools. This dual role of AI presents a unique challenge—while it helps in defending against synthetic media, it also serves as the foundation for producing increasingly convincing deepfakes. The result is a perpetual cycle of creation and detection, where advances on one side directly fuel innovation on the other. This arms race continues to shape the future of media integrity and security.
Key Technologies in Deepfake Detection
The rapid advancement of deepfakes has prompted the development of sophisticated AI and machine learning technologies to detect synthetic content. These technologies harness the power of neural networks, deep learning models, and advanced forensics techniques to identify even the most subtle manipulations. Here’s a detailed look at some of the top technologies used in deepfake detection.
- Convolutional Neural Networks (CNNs): Convolutional Neural Networks (CNNs) are among the most widely used AI tools in deepfake detection. CNNs excel at processing and analyzing visual data, making them ideal for detecting image and video anomalies. By breaking down visual content into smaller pixel-level units, CNNs can identify inconsistencies that are typically invisible to the human eye. For example, they can detect subtle differences in skin texture, lighting, and facial expressions across frames. Trained on massive datasets of real and fake media, CNNs learn to spot even the slightest signs of tampering. Their ability to handle complex visual data makes them central to the detection of deepfake videos.
- GANs for Detecting Deepfakes: Generative Adversarial Networks (GANs), the very technology used to create deepfakes, are also employed in detecting them. GANs consist of two neural networks: a generator that creates fake content and a discriminator that tries to distinguish between real and fake. In detection, GANs are used to reverse-engineer deepfake generation processes by analysing and comparing real content with synthetically produced media. Detection-focused GANs excel at identifying unusual artifacts in deepfake videos, such as inconsistencies in lighting, facial alignment, or audio mismatches.
- Audio-Visual Forensics: Audio-visual forensics integrates AI-driven techniques to analyse both the video and audio components of media. Deepfakes often struggle to perfectly sync voice and facial movements, creating detectable discrepancies. By analysing the synchronization between lip movement and speech, AI algorithms can detect subtle differences that suggest manipulation. Additionally, deepfakes tend to introduce audio artifacts, such as unnatural pauses or pitch irregularities, which can be flagged by forensic tools. This method is especially useful for catching deepfake videos where the speaker’s words and facial movements don’t align naturally.
- Real-World Applications: AI-based deepfake detection technologies have found crucial applications in various fields. In media, news organizations are employing these tools to verify the authenticity of video content before broadcasting. Security agencies use AI to detect deepfakes in surveillance footage or to prevent the spread of disinformation during elections. In law enforcement, deepfake detection helps combat criminal activities like fraud or impersonation by identifying doctored evidence. Social media platforms are increasingly deploying AI-powered detection tools to remove manipulated content, safeguarding user trust.
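The audio-visual forensics idea above can be illustrated with a simple correlation check between a per-frame lip-opening signal and the audio energy envelope. Real forensic tools use learned embeddings (e.g. SyncNet-style models); this Pearson-correlation stand-in, with a hypothetical threshold, only sketches the principle.

```python
# Hedged sketch: flag clips whose mouth motion barely tracks the audio.
# Real tools use learned audio-visual embeddings, not raw correlation.
import math

def pearson(a, b):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    if sa == 0 or sb == 0:
        return 0.0
    return cov / (sa * sb)

def av_sync_suspicious(lip_opening, audio_energy, min_corr=0.5):
    """Flag a clip whose lip motion is poorly correlated with audio energy."""
    return pearson(lip_opening, audio_energy) < min_corr
```

A genuine talking-head clip should show mouth openings rising and falling with speech energy; a dubbed or generated track often breaks that coupling, driving the correlation down.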
Challenges in Deepfake Detection
Despite significant advancements, deepfake detection technologies face several challenges that limit their effectiveness. As deepfake creation methods become more sophisticated, the limitations of current AI and machine learning models are increasingly exposed.
- Limitations of AI Models: One major challenge is the inability of AI models to keep pace with the rapid evolution of deepfake techniques. Deepfake generation tools, especially those based on Generative Adversarial Networks (GANs), are constantly improving, making it harder for existing detection models to identify fakes. Additionally, detection tools often rely on massive datasets for training, and deepfake creators can exploit unseen techniques that the AI hasn’t been trained to detect. This means that newer, more advanced deepfakes may bypass even the most advanced detection algorithms.
- Ethical Concerns and Bias: AI detection systems are not immune to biases. Detection algorithms may perform unevenly across different demographics, such as race, gender, or age, leading to false positives or negatives. For instance, facial recognition and detection models have historically struggled with people of colour due to unbalanced training data, which raises concerns about fairness and inclusivity. Ethical questions also arise when it comes to privacy, as detecting deepfakes may require intrusive data collection, such as facial scans or personal audio recordings, which could infringe on individual rights.
- Accessibility and Open-Source Tools: Many advanced deepfake detection tools are developed by large corporations or government agencies, limiting public access. The lack of open-source detection software means that smaller organizations, independent media outlets, and the general public have fewer resources to detect deepfakes. This disparity in access puts underfunded groups at a disadvantage when combating misinformation. The need for more accessible and open-source tools is crucial in ensuring that everyone can participate in the fight against deepfakes and safeguard the integrity of information.
Future Trends: What Lies Ahead?
The future of deepfake detection is set to be shaped by emerging technologies and innovative approaches aimed at staying ahead of increasingly sophisticated synthetic media. As deepfakes evolve, more accurate and reliable detection tools are needed to safeguard the integrity of digital content. Here are some key trends that are likely to shape the future of deepfake detection.
- Advanced AI Tools: One of the most promising trends is the development of more sophisticated AI tools, such as self-supervised learning and transformer models. Unlike traditional deep learning models that require massive datasets, self-supervised models can learn from smaller data samples, making them more adaptable to new and evolving deepfake techniques. Transformer models, which have revolutionized natural language processing, are being adapted to analyse and cross-verify both visual and audio data, improving detection accuracy. These advanced tools will enhance AI’s ability to identify subtle anomalies in deepfakes.
- Blockchain for Decentralized Verification: Blockchain technology offers a novel solution to the deepfake problem through decentralized media verification. By creating immutable records of media content at the point of creation, blockchain can verify the authenticity of images, videos, and audio files as they circulate online. Any alterations to the original content can be detected through the blockchain ledger, ensuring transparency and accountability. This decentralized approach empowers content creators and consumers to verify the integrity of digital media without relying on centralized platforms.
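The verification scheme above boils down to content fingerprinting: record a cryptographic hash of the media at creation time, then re-hash later copies and compare. The sketch below uses a plain dictionary as the "ledger" purely for illustration; a real system would store the digest in an immutable, distributed ledger.

```python
# Minimal sketch of hash-based media verification. The dict stands in
# for a blockchain ledger; only the hashing/comparison logic is real.
import hashlib

ledger = {}  # media_id -> SHA-256 hex digest recorded at creation

def register(media_id: str, content: bytes) -> str:
    """Record the content's fingerprint at the point of creation."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[media_id] = digest
    return digest

def verify(media_id: str, content: bytes) -> bool:
    """True only if the content still matches its registered fingerprint."""
    return ledger.get(media_id) == hashlib.sha256(content).hexdigest()
```

Because SHA-256 changes completely for even a one-byte edit, any alteration to the registered media fails verification, which is what makes tamper detection possible.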
- AI-Based Content Verification: The future will likely see the integration of AI-based content verification systems across social media platforms, news organizations, and security agencies. These systems could operate in real-time, flagging potential deepfakes as they are uploaded or shared. Combined with technologies like digital watermarking, which embeds hidden, tamper-proof identifiers in media, AI-based systems will offer an automated, scalable solution to deepfake detection.
Conclusion
The ongoing battle against deepfakes highlights the crucial role that AI and machine learning play in preserving the integrity of digital media. Through advanced techniques like CNNs, GANs, and audio-visual forensics, these technologies enable the detection of subtle manipulations in synthetic media, helping to safeguard trust in what we consume online. However, the continuous arms race between deepfake creators and detectors underscores the need for ongoing innovation. The continued development of AI-driven detection tools is vital to staying ahead of increasingly sophisticated deepfakes. As the technology evolves, so too must our defences. Ensuring the authenticity of digital content is not just a technical challenge but a societal imperative to protect individuals, institutions, and the broader public from the harmful impacts of misinformation and deception in an ever-expanding digital landscape.
Author’s Name: Ahmed Olabisi Olajide (Co-founder, Eybrids)
LinkedIn: Olabisi Olajide
NITDA urges users of LiteSpeed Cache plugin for WordPress to update
The National Information Technology Development Agency (NITDA) has called on users of the LiteSpeed Cache plugin for WordPress to update to the latest version (6.4.1) to prevent their websites from being attacked.
Mrs Hadiza Umar, Director, Corporate Affairs and External Relations at NITDA, said this in a statement in Abuja on Monday.
LiteSpeed Cache for WordPress (LSCWP) is an all-in-one site acceleration plugin, featuring an exclusive server-level cache and a collection of optimisation features.
Umar said that a critical security vulnerability (CVE-2024-28000) had been discovered in the LSCWP, affecting over five million websites.
“This vulnerability allows attackers to take complete control of a website without requiring any authentication.
“The vulnerability is due to a flaw in the plugin’s role simulation feature and if exploited, an attacker can manipulate this flaw to gain administrative access to the website.
“This could lead to the installation of malicious plugins, theft of data, or even redirection of site visitors to harmful websites.
“Website administrators using the LiteSpeed Cache plugin are strongly advised to update to the latest version (6.4.1) immediately,” she said.
She noted that the simplicity of the attack vector, combined with a weak hash function, made it easy for attackers to exploit this vulnerability by guessing via brute-forcing or exploiting exposed debug logs.
According to her, to check for updates, log in to your WordPress dashboard and navigate to the Plugins section, where you can update the LiteSpeed Cache plugin.
“As a precautionary measure, administrators should ensure that debugging is disabled on live websites and regularly audit their plugin settings to prevent vulnerabilities from being exploited,” Umar said.
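For administrators who manage their sites from the command line, the same update can be applied with WP-CLI, assuming WP-CLI is installed and the commands are run from the WordPress root; the dashboard route described in the statement works equally well.

```shell
# Check the currently installed version of the plugin (slug: litespeed-cache)
wp plugin get litespeed-cache --field=version

# Update the plugin, then confirm the version is 6.4.1 or later
wp plugin update litespeed-cache
wp plugin get litespeed-cache --field=version
```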
NITDA DG Showcases Nigeria’s Digital Transformation at 79th UN General Assembly
The Director General of the National Information Technology Development Agency (NITDA), Kashifu Inuwa Abdullahi CCIE, made significant contributions to discussions on Africa’s digital transformation at the 79th session of the United Nations General Assembly (UNGA 79) in New York.
Participating in a series of high-profile events, the DG emphasized Nigeria’s efforts in digital transformation, cybersecurity, and public-private partnerships.
At a panel titled “Digital Transformation in Africa: Jumping Ten Years in One,” hosted by the Consulate of Denmark in New York, the UNDP, and cBrain, Abdullahi highlighted how government-led initiatives, private-sector collaboration, and digital technologies can drive Africa’s future growth.
He shared insights from NITDA’s own digital initiatives, offering valuable lessons for other African nations in their pursuit of sustainable digital growth. The panel underscored Africa’s role as the next global workforce frontier.
Also, during the “Summit of the Future” held at the UN headquarters, the DG participated in a session focused on “The Power of the Commons: Digital Public Goods for a More Secure, Inclusive, and Resilient World.”
The panel explored the importance of Digital Public Goods (DPGs) and Digital Public Infrastructure (DPI) in social and economic development. The NITDA boss emphasized the crucial role of academia in advancing digital commons and stressed the significance of safeguarding digital infrastructure for global security.
In a bilateral meeting with Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU), Abdullahi also discussed strategic collaboration in capacity building, cybersecurity, and digital transformation. He highlighted NITDA’s efforts to align with ITU’s goals in fostering Nigeria’s IT development, highlighting the importance of international cooperation in achieving mutual digital growth objectives.
On the sidelines of the assembly, the NITDA DG met with Cisco Vice President Fran Katsoudas to review the progress of the Cisco Country Digital Acceleration (CDA) programme. The CDA programme is a key initiative in Nigeria’s digital transformation, aiming to stimulate economic growth and promote innovation. The DG also explored potential collaborations in enhancing cybersecurity to support Nigeria’s long-term development goals under President Bola Ahmed Tinubu’s Renewed Hope Agenda.
Additionally, the NITDA boss was a key participant in the launch of the Universal Digital Public Infrastructure (DPI) Safeguards Framework. This new framework offers guidelines for the design and implementation of DPI, prioritising public interest and promoting safe, inclusive, and interoperable digital infrastructure. He shared Nigeria’s experience in building a resilient and secure DPI, reinforcing the nation’s commitment to leveraging technology for sustainable development.
Through these engagements, the NITDA DG showcased Nigeria’s leadership in digital transformation, positioning the country as a key player in Africa’s digital future.
Data Protection and People’s Rights Under Nigeria’s Data Protection Regulations (NDPR): Know Your Rights
At a time when personal information is increasingly valuable and at risk of being exploited, safeguarding people’s privacy has become a significant priority. The implementation of the Nigeria Data Protection Regulation (NDPR) in 2019 was a major move towards protecting citizens’ personal information and ensuring organizations follow legal and ethical guidelines when handling data. As more Nigerians participate in digital activities like online banking, e-commerce, and social media, the NDPR is fundamental in shaping how data is collected, processed, and protected. This article examines the main provisions of the NDPR, the rights it provides to individuals, and its influence on companies and the digital environment in Nigeria.
What Is NDPR?
The National Information Technology Development Agency (NITDA) introduced the Nigeria Data Protection Regulation (NDPR) in January 2019. The NDPR was created to tackle the increasing concerns about personal data misuse in both private and public sectors. It is in line with worldwide data protection trends, like the European Union’s General Data Protection Regulation (GDPR), while also meeting the unique requirements of Nigeria’s digital environment.
The goal of the regulation is to safeguard Nigerian citizens’ data from unauthorized access, exposure, or exploitation. It includes a range of industries like finance, telecom, education, health, and online shopping, which commonly involve gathering and handling personal data.
Key Provisions of the NDPR
The NDPR outlines specific guidelines on how organizations should handle personal data. Some of the provisions, as outlined in the NDPR guidelines, are:
Data Collection and Consent: Organizations must obtain explicit consent from individuals before collecting their personal data. This ensures that data subjects are fully aware of what information is being collected, the purpose of its collection, and how it will be used.
Data Processing: The regulation mandates that personal data should only be processed for legitimate and specified purposes. Organizations must ensure that the data is accurate and kept up to date. Processing personal data for purposes other than those originally specified is not permitted without further consent from the individual.
Data Security: One of the core elements of the NDPR is the requirement for organizations to implement adequate security measures to protect personal data. This includes safeguarding data from unauthorized access, data breaches, or any form of manipulation.
Third-Party Sharing: If personal data is to be shared with third parties, the organization must inform the data subject and obtain their consent. The third party must also adhere to the same level of data protection as stipulated by the NDPR.
Data Breach Notifications: In the event of a data breach, organizations are required to notify the affected individuals and NITDA within a specified period. This provision ensures that individuals can take action to mitigate the effects of a breach.
People’s Rights Under The NDPR
The acknowledgement of people’s rights regarding their personal data is a key aspect of the NDPR. The regulation gives Nigerians various rights to manage how their data is treated. Some of these rights are:
- Right to be Informed: Individuals have the right to be informed about the collection and use of their personal data. Organizations are required to provide transparent information on the types of data collected, the purpose of the collection, and how long the data will be retained.
- Right to Access: Data subjects have the right to request access to their personal data held by an organization. This means they can inquire about the specific data collected, the reasons for its collection, and whether it has been shared with third parties.
- Right to Rectification: If an individual’s personal data is inaccurate or incomplete, they have the right to request that the organization correct or update the information.
- Right to Erasure (Right to be Forgotten): Under certain circumstances, individuals can request that their personal data be deleted. This is particularly relevant if the data is no longer necessary for the purpose it was originally collected or if the individual withdraws their consent for its processing.
- Right to Data Portability: This allows individuals to obtain and reuse their personal data across different services. They have the right to request that their data be transferred from one service provider to another in a commonly used, machine-readable format.
- Right to Object: Individuals have the right to object to the processing of their personal data in cases where the processing is based on legitimate interests or public tasks, direct marketing, or scientific/historical research.
Rights Of Individuals In Cases Of Data Misuse, Breaches, Or Use Without Consent
The NDPR grants the data subject particular rights and solutions if their data is mismanaged, disclosed, or utilized without authorization. These rights give individuals the ability to find a solution and shield themselves from additional damage. Some important rights in such situations include:
- Right to lodge a complaint:
According to Section 3.1.1(e) of the NDPR, individuals have the option to file a complaint with NITDA or other authorized regulatory entities if they suspect their data has been mishandled, processed illegally, or exposed. This privilege allows people to seek legal recourse in cases of mishandling of their information by a company.
- Right to Compensation
The NDPR acknowledges the entitlement to receive compensation for harm caused by data breaches or unauthorized data handling. Individuals can request compensation from the data controller under section 2.10 of the NDPR if they can prove that their data rights violation resulted in harm. This clause guarantees that individuals affected by data breaches can receive compensation for any financial losses, emotional distress, or harm to their reputation.
- Right to withdraw consent
Individuals can revoke their consent for the processing of their personal data whenever they choose. As per Section 2.8 of the NDPR, organizations must respect these requests and stop processing the individual’s data unless there are strong legitimate reasons for the processing. This right is important when data is utilized without permission, enabling individuals to take back control of their personal information.
- Right to Data Erasure
If personal data is breached or used without authorization, individuals have the right to request erasure. According to Section 3.1.2(f) of the NDPR, individuals have the right to ask for the deletion of their personal data if it has been used without permission or if the reason for collecting the data is no longer valid. This right, sometimes referred to as the “right to be forgotten,” guarantees that unauthorized data use is stopped and eliminated from any future handling.
- Right to Restriction of Processing
If someone believes their data has been mishandled or misused, they can ask for processing restrictions under Section 2.10.2. This right enables people to halt additional data processing during ongoing investigations. It serves as a protection, making sure no additional damage occurs during the resolution of the problem.
Benefits To Individuals
When individuals’ rights are breached under the NDPR, they are entitled to certain remedies and benefits.
- Reclaiming Privacy: Through exercising the right to be forgotten or limiting additional data processing, individuals can take back authority over their personal information and reduce the consequences of its unauthorized exploitation.
- Financial Compensation: If individuals experience financial loss or emotional distress due to a data breach or misuse, they have the right to request financial compensation from the organization at fault. This serves as a deterrent for careless data handlers and compensates for the damages they cause.
- Legal Remedy: By utilizing the NDPR’s complaint procedures and regulatory supervision, people have the opportunity to take legal measures or regulatory actions to hold those responsible for data misuse or breaches accountable.
- Public Trust: The NDPR’s protections promote trust in the digital world, inspiring people to engage in online activities knowing their data rights are secure.
Compliance Requirements For Organizations
In order to comply with the NDPR, organizations must meet various obligations. Some of these are:
- Appointment of Data Protection Officers (DPOs): Organizations that process a large volume of personal data must appoint a DPO to oversee compliance with the NDPR and ensure the organization’s data practices are in line with the regulation.
- Annual Data Protection Audit: Organizations are required to conduct annual data protection audits and submit the reports to NITDA. This process helps organizations identify potential risks and ensure that they are taking the necessary steps to protect personal data.
- Fines for Non-Compliance: Failure to comply with the NDPR can result in significant penalties, including fines of up to 10 million Naira or 2% of an organization’s annual revenue, depending on the nature and severity of the breach.
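As a worked illustration of the penalty scale mentioned above: the NDPR’s penalty clause is commonly read as imposing the greater of the two amounts for organizations in the higher bracket. The sketch below uses that reading with the figures from the text, so treat it as illustrative arithmetic, not legal guidance.

```python
# Illustrative only: penalty ceiling per the figures in the text
# (NGN 10,000,000 or 2% of annual revenue), read as "whichever is greater".
def max_ndpr_fine(annual_revenue_ngn: float) -> float:
    return max(10_000_000, 0.02 * annual_revenue_ngn)
```

For a firm with NGN 2 billion in annual revenue, 2% comes to NGN 40 million, so the revenue-based figure dominates; below roughly NGN 500 million in revenue, the flat NGN 10 million floor is the larger amount.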
Challenges and Gaps in NDPR Implementation
Even though the NDPR has created a strong foundation for safeguarding data in Nigeria, there are still obstacles in its execution. An important obstacle is the lack of public awareness and law enforcement. A large number of Nigerian citizens are still not completely informed about their data rights or the responsibilities that organizations have under the NDPR. Raising public education and awareness is essential in order to give citizens the power to safeguard their privacy.
Another difficulty that must be addressed is ensuring compliance. While NITDA has made progress in encouraging adherence, there are doubts about the agency’s ability to ensure proper enforcement of regulations, especially with major international companies, government agencies and smaller domestic enterprises.
Conclusion
The NDPR in Nigeria sets up rules for data protection and gives individuals rights to safeguard their personal information. The regulation offers various solutions, such as compensation and erasure rights, in situations where there is data misuse, breaches, or unauthorized processing. These safeguards are essential for establishing confidence in Nigeria’s fast-developing digital economy and guaranteeing the preservation of privacy in the era of digital technology. As the public becomes more aware of their data rights and enforcement becomes more rigorous, the NDPR will remain vital in influencing Nigeria’s digital future.
Written By Ibrahim Abuh Sani, Co-Founder, Eybrids.