Supervised Learning: The Foundation of Predictive Modeling

Editor’s note: This article is part of our series on visualizing the foundations of machine learning, in which we break down important and often complex technical concepts into intuitive, visual guides to help you master the core principles of the field. This entry focuses on supervised learning, the foundation of predictive modeling.

The Foundation of Predictive Modeling

Supervised learning is widely regarded as the foundation of predictive modeling in machine learning. But why? At its core, it is a learning paradigm in which a model is trained on labeled data: examples where both the input features and the correct outputs (the ground truth) are known. By learning from these labeled examples, the model can make accurate predictions on new, unseen data.

A helpful way to understand supervised learning is through the analogy of learning with a teacher. During training, the model is shown examples along with the correct answers, much like a student receiving guidance and correction from an instructor. Each prediction the model makes is compared to the ground-truth label, feedback is provided, and adjustments are made to reduce future mistakes. Over time, this guided process helps the model internalize the relationship between inputs and outputs. The objective of supervised learning is to learn a reliable mapping from features to labels.
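As a concrete, minimal sketch of this train-on-labels, predict-on-unseen-data loop, here is an illustrative scikit-learn example. The dataset is synthetic and every parameter choice below is an arbitrary assumption made purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labeled data: each row of X is an input, each entry of y a ground-truth label
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Hold out unseen data so we can check generalization
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# "Learning with a teacher": fit() compares predictions to the labels and adjusts parameters
model = LogisticRegression().fit(X_train, y_train)

# The trained model maps new, unseen inputs to predicted labels
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The held-out accuracy is the practical measure of how well the learned feature-to-label mapping generalizes beyond the training examples.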
This process revolves around three essential components:

1. The training data, which consists of labeled examples and serves as the foundation for learning.
2. The learning algorithm, which iteratively adjusts model parameters to minimize prediction error on the training data.
3. The trained model, which emerges from this process, capable of generalizing what it has learned to make predictions on new data.

Supervised learning problems generally fall into two major categories. Regression tasks focus on predicting continuous values, such as house prices or temperature readings. Classification tasks, on the other hand, involve predicting discrete categories, such as identifying spam versus non-spam emails or recognizing objects in images. Despite their differences, both rely on the same core principle of learning from labeled examples.

Supervised learning plays a central role in many real-world machine learning applications. It typically requires large, high-quality datasets with reliable ground-truth labels, and its success depends on how well the trained model generalizes beyond the data it was trained on. When applied effectively, supervised learning enables machines to make accurate, actionable predictions across a wide range of domains.

The visualization below provides a concise summary of this information for quick reference. You can download a high-resolution PDF of the infographic here.
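The two task categories above can be sketched side by side. This is an illustrative example on synthetic data; the generators and model choices are assumptions, not recommendations:

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict a continuous value (e.g. a house price or a temperature)
Xr, yr = make_regression(n_samples=100, n_features=3, noise=0.1, random_state=0)
reg = LinearRegression().fit(Xr, yr)
print("continuous prediction:", reg.predict(Xr[:1])[0])

# Classification: predict a discrete category (e.g. spam vs. non-spam)
Xc, yc = make_classification(n_samples=100, n_features=4, random_state=0)
clf = LogisticRegression().fit(Xc, yc)
print("discrete prediction:", clf.predict(Xc[:1])[0])
```

Note that both models follow the same fit-then-predict pattern; only the type of output (a real number versus a class label) differs, which is exactly the core principle shared by the two categories.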
Supervised Learning: Visualizing the Foundations of Machine Learning (click to enlarge). Image by Author

Machine Learning Mastery Resources

These are some selected resources for learning more about supervised learning:

Supervised and Unsupervised Machine Learning Algorithms – This beginner-level article explains the differences between supervised, unsupervised, and semi-supervised learning, outlining how labeled and unlabeled data are used and highlighting common algorithms for each approach. Key takeaway: knowing when to use labeled versus unlabeled data is fundamental to choosing the right learning paradigm.

Simple Linear Regression Tutorial for Machine Learning – This practical, beginner-friendly tutorial introduces simple linear regression, explaining how a straight-line model is used to describe and predict the relationship between a single input variable and a numerical output. Key takeaway: simple linear regression models relationships using a line defined by learned coefficients.

Linear Regression for Machine Learning – This introductory article provides a broader overview of linear regression, covering how the algorithm works, its key assumptions, and how it is applied in real-world machine learning workflows. Key takeaway: linear regression serves as a core baseline algorithm for numerical prediction tasks.

4 Types of Classification Tasks in Machine Learning – This article explains the four primary types of classification problems (binary, multi-class, multi-label, and imbalanced classification) using clear explanations and practical examples. Key takeaway: correctly identifying the type of classification problem guides model selection and evaluation strategy.
One-vs-Rest and One-vs-One for Multi-Class Classification – This practical tutorial explains how binary classifiers can be extended to multi-class problems using One-vs-Rest and One-vs-One strategies, with guidance on when to use each. Key takeaway: multi-class problems can be solved by decomposing them into multiple binary classification tasks.

Be on the lookout for additional entries in our series on visualizing the foundations of machine learning.

About Matthew Mayo

Matthew Mayo (@mattmayo13) holds a master’s degree in computer science and a graduate diploma in data mining. As managing editor of KDnuggets & Statology, and contributing editor at Machine Learning Mastery, Matthew aims to make complex data science concepts accessible. His professional interests include natural language processing, language models, machine learning algorithms, and exploring emerging AI. He is driven by a mission to democratize knowledge in the data science community. Matthew has been coding since he was 6 years old.
10 Ways to Use Embeddings for Tabular ML Tasks
Introduction

Embeddings — vector-based numerical representations of typically unstructured data like text — were primarily popularized in the field of natural language processing (NLP). But they are also a powerful tool for representing or supplementing tabular data in other machine learning workflows. They apply not only to text data, but also to categorical features with a high diversity of latent semantic properties. This article uncovers 10 insightful uses of embeddings to leverage data to its fullest across a variety of machine learning tasks, models, or projects as a whole.

Initial Setup

Some of the 10 strategies described below are accompanied by brief illustrative code excerpts. The toy dataset used in the examples is defined first, along with the most basic and commonplace imports needed in most of them.

import pandas as pd
import numpy as np

# Example customer reviews toy dataset
df = pd.DataFrame({
    "user_id": [101, 102, 103, 101, 104],
    "product": ["Phone", "Laptop", "Tablet", "Laptop", "Phone"],
    "category": ["Electronics", "Electronics", "Electronics", "Electronics", "Electronics"],
    "review": ["great battery", "fast performance", "light weight", "solid build quality", "amazing camera"],
    "rating": [5, 4, 4, 5, 5]
})

1. Encoding Categorical Features With Embeddings

This is a useful approach in applications like recommender systems.
Rather than being handled numerically, high-cardinality categorical features, like user and product IDs, are best turned into vector representations. This approach has been widely applied and shown to effectively capture the semantic properties of, and relationships among, users and products. This practical example defines a pair of embedding layers as part of a neural network model that takes user and product identifiers and converts them into embeddings.

from tensorflow.keras.layers import Input, Embedding, Flatten, Dense, Concatenate
from tensorflow.keras.models import Model

# User branch: map integer user IDs to 8-dimensional embeddings
user_input = Input(shape=(1,))
user_embed = Embedding(input_dim=500, output_dim=8)(user_input)
user_vec = Flatten()(user_embed)

# Product branch: map integer product IDs to 8-dimensional embeddings
prod_input = Input(shape=(1,))
prod_embed = Embedding(input_dim=50, output_dim=8)(prod_input)
prod_vec = Flatten()(prod_embed)

# Concatenate both embeddings and regress a single output (e.g. a rating)
concat = Concatenate()([user_vec, prod_vec])
output = Dense(1)(concat)

model = Model([user_input, prod_input], output)
model.compile("adam", "mse")

2. Averaging Word Embeddings for Text Columns

This approach compresses multiple texts of variable length into fixed-size embeddings by aggregating word-wise embeddings within each text sequence. It resembles one of the most common uses of embeddings; the twist here is aggregating word-level embeddings into a sentence- or text-level embedding.
The following example uses Gensim, which implements the popular Word2Vec algorithm to turn linguistic units (typically words) into embeddings, and aggregates multiple word-level embeddings to create an embedding for each user review.

from gensim.models import Word2Vec

# Train embeddings on the review text
sentences = df["review"].str.lower().str.split().tolist()
w2v = Word2Vec(sentences, vector_size=16, min_count=1)

# Average the word vectors in each review into a single fixed-size vector
df["review_emb"] = df["review"].apply(
    lambda t: np.mean([w2v.wv[w] for w in t.lower().split()], axis=0)
)

3. Clustering Embeddings Into Meta-Features

Vertically stacking the individual embedding vectors into a 2D NumPy array (a matrix) is the core step in clustering a set of customer review embeddings to identify natural groupings that might correspond to topics in the review set. This technique captures coarse semantic clusters and can yield new, informative categorical features.

from sklearn.cluster import KMeans

# Stack per-review embeddings into an (n_reviews, 16) matrix and cluster them
emb_matrix = np.vstack(df["review_emb"].values)
km = KMeans(n_clusters=3, random_state=42).fit(emb_matrix)
df["review_topic"] = km.labels_

4. Learning Self-Supervised Tabular Embeddings

As surprising as it may sound, learning numerical vector representations of structured data — particularly for unlabeled datasets — is a clever way to turn an unsupervised problem into a self-supervised learning problem: the data itself generates the training signals.
While these approaches are a bit more elaborate than the practical scope of this article, they commonly use one of the following strategies:

Masked feature prediction: randomly hide some feature values — similar to masked language modeling for training large language models (LLMs) — forcing the model to predict them based on the remaining visible features.

Perturbation detection: expose the model to a noisy variant of the data, with some feature values swapped or replaced, and set the training goal as identifying which values are legitimate and which have been altered.

5. Building Multi-Labeled Categorical Embeddings

This is a robust approach to prevent runtime errors when certain categories are not in the vocabulary used by embedding algorithms like Word2Vec, while maintaining the usability of embeddings. This example represents a single category like “Phone” using multiple tags such as “mobile” or “touch.” It builds a composite semantic embedding by aggregating the embeddings of associated tags. Compared to standard categorical encodings like one-hot, this method captures similarity more accurately and leverages knowledge beyond what Word2Vec “knows.”

tags = {
    "Phone": ["mobile", "touch"],
    "Laptop": ["portable", "cpu"],
    "Tablet": []  # Added to handle the 'Tablet' product
}

def safe_mean_embedding(words, model, dim):
    # Fall back to a zero vector when no tag is in the model's vocabulary
    vecs = [model.wv[w] for w in words if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

df["tag_emb"] = df["product"].apply(
    lambda p: safe_mean_embedding(tags[p], w2v, 16)
)
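The masked feature prediction strategy described under section 4 can be conveyed with a deliberately simplified, self-contained sketch: hide one feature column of an unlabeled table and train a model to reconstruct it from the visible columns, so the data itself supplies the training signal. Everything below — the synthetic data, the single masked column, and the linear model standing in for a neural encoder — is an assumption for illustration, not a full self-supervised pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Unlabeled tabular data: 200 rows, 4 numeric features; the last column
# is (by construction) a noisy function of the first three
X = rng.normal(size=(200, 3))
X = np.hstack([X, X.sum(axis=1, keepdims=True) + rng.normal(scale=0.1, size=(200, 1))])

# Self-supervision: mask one feature column and predict it from the rest.
# No external labels are needed -- the hidden values are the targets.
masked_col = 3
visible = np.delete(X, masked_col, axis=1)
target = X[:, masked_col]

model = LinearRegression().fit(visible, target)
r2 = model.score(visible, target)
print(f"reconstruction R^2 for masked column: {r2:.3f}")
```

Real implementations mask random cells rather than a fixed column and use a neural encoder whose intermediate activations become the tabular embeddings; the sketch only illustrates where the training signal comes from.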
How to Read a Machine Learning Research Paper in 2026
In this article, you will learn a practical, question-driven workflow for reading machine learning research papers efficiently, so you finish with answers — not fatigue.

Topics we will cover include:

Why purpose-first reading beats linear, start-to-finish reading.
A lightweight triage: title + abstract + five-minute skim.
How to target sections to answer your questions and retain what matters.

Let’s not waste any more time.

Introduction

When I first started reading machine learning research papers, I honestly thought something was wrong with me. I would open a paper, read the first few pages carefully, and then slowly lose focus. By the time I reached the middle, I felt tired, confused, and unsure what I had actually learned. During literature reviews, this feeling became even worse. Reading multiple long papers in a row drained my energy, and I often felt frustrated instead of confident.

At first, I assumed this was just my lack of experience. But after talking to others in my research community, I realized this struggle is extremely common. Many beginners feel overwhelmed when reading papers, especially in machine learning, where ideas, terminology, and assumptions move fast. Over time, and after spending more than two years around research, I realized the issue was not me. The issue was how I was reading papers.

One Idea That Changed Everything for Me

Most beginners approach research papers the same way they approach textbooks or articles: start from the beginning and read until the end. The problem is that research papers are not written to be read that way. They are written for people who already have questions in mind. If you read without knowing what you are looking for, your brain has no anchor. That is why everything starts to blur together after a few pages.

Once I understood this, my entire approach changed. The biggest shift I made was simple: never read a paper without a reason.
A paper is not something you read just to finish it. You read it to answer questions. If you do not have questions, the paper will feel meaningless and exhausting. This idea really clicked for me after taking a course on Adaptive AI by Evan Shelhamer (formerly at Google DeepMind). I will not get into who originally proposed the technique, but the mindset behind it completely changed how I read papers. Since then, reading papers has felt lighter and much more manageable. And I will share the strategy in this article.

Starting With Only the Title and Abstract

Whenever I open a new paper now, I do not jump into the introduction. I read only two things: the title and the abstract. I spend no more than one or two minutes here. At this point, I am only trying to understand three things in a very rough way:

What problem is this paper trying to solve?
What kind of solution are they proposing?
Do I care about this problem right now?

If the answer to the last question is no, I skip the paper. And that is completely okay. You do not need to read every paper you open.

Writing Down What Confuses You

After reading the abstract, I stop. Before reading anything else, I write down what I did not understand or what made me curious. This step sounds small, but it makes a huge difference. For example, when I read the abstract of the paper “Test-Time Training with Self-Supervision for Generalization under Distribution Shifts”, I was confused at one point and wrote this question in my notes:

What exactly do they mean by “turning a single unlabeled test sample into a self-supervised learning problem”?

I knew what self-supervised learning was, but I could not picture how that would work for the problem being discussed in the paper. So I wrote that question down. That question gave me a reason to continue reading. I was no longer reading blindly. I was reading to find an answer.
If you understand the problem statement reasonably well, pause for a moment and ask yourself: How would I approach this problem? What naive or baseline solution would I try? What assumptions would I make? This part is optional, but it helps you actively compare your thinking with the authors’ decisions.

Doing a Quick Skim Instead of Deep Reading

Once I have my questions, I do a quick skim of the paper. This usually takes around five minutes. I do not read every line. Instead, I focus on:

The introduction, to see how the authors frame the problem (only if I lack background on the paper’s topic).
Figures and diagrams, because they often explain more than the text.
A high-level look at the method section, just to see what is happening overall.
The results, to understand what actually improved.

At this stage, I am not trying to fully understand the method. I am just building a rough picture.

Asking Better Questions

After skimming, I usually end up with more questions than I started with. And that is a good thing. These questions are more specific now. They might be about why certain design choices were made, why some results look better than others, or what assumptions the method relies on. This is the point where reading starts to feel interesting instead of exhausting.

Reading Only What Helps Answer Your Questions

Now I finally read more carefully, but still not from start to end. I jump to the parts of the paper that help answer my questions. I search for keywords using Ctrl + F / Cmd + F, check the appendix, and sometimes skim related work that the authors say they are closely building on. My goal is not to understand everything. My goal is to understand what I care about. By the time I reach the end, I usually feel satisfied instead of tired, because my questions have been answered. I also start to see gaps, limitations, and opportunities much more clearly, because
Is Elon Musk-Owned X Down Globally? Users Report Issues With Mobile App And Website
X Down: Elon Musk-owned social media platform X (formerly Twitter) reportedly faced a service outage in India on Tuesday evening, leaving many users unable to access the app or the website. According to the Downdetector website, more than 5,000 complaints were logged within a short period, suggesting the issue was part of a wider global disruption.

User reports indicated that most people encountered technical problems on the app. About 59% of users said they were unable to access the mobile app, making it the most common issue. Meanwhile, 33% reported difficulty loading the website, while the remaining 8% faced server-related issues or problems refreshing their feeds. The disruption was most pronounced in major cities such as Delhi, Jaipur, Ahmedabad, Indore, Chandigarh, Kolkata, and Mumbai, according to Downdetector’s outage map. Users also reported encountering error messages such as “Something went wrong,” with home feeds and notifications failing to load.

Meanwhile, in the United States, more than 22,900 users had reported issues with the platform as of 9:19 am ET. Downdetector also recorded over 7,000 outage reports from users in the United Kingdom by 9:20 am ET, along with more than 2,700 reports from Canada.
Ever Wondered? If Internet Connection Is Wireless, Why Does It Show No Signal In Basements Or Lifts?
Even though mobile internet is wireless, many people notice that the signal weakens or disappears completely in basements and lifts. Have you ever wondered why this happens? The answer lies in a simple technical explanation. This is not a device fault but a limitation of how wireless networks work. Mobile internet depends on radio waves sent from nearby cell towers, and these waves need a clear path to reach your phone.

Mobile data works through cell towers that transmit radio signals through the air. Your phone connects to the nearest tower using these signals. The stronger and clearer the signal, the faster and more stable the internet connection. Open areas with fewer obstacles usually have better coverage, while thick concrete walls, metal structures, and underground locations absorb or block these radio waves, which is why basements and lift cabins often show little or no signal.
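To make the idea concrete, here is a small, illustrative Python sketch. The free-space path loss formula used is standard (distance in km, frequency in MHz, result in dB), but the scenario numbers — tower distance, frequency band, transmit power, and the per-obstacle losses — are rough, assumed values, not measurements:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Standard free-space path loss formula, in dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Assumed scenario: tower 0.5 km away, 1800 MHz band, 43 dBm transmit power
tx_power_dbm = 43.0
path_loss = free_space_path_loss_db(0.5, 1800.0)

open_air_signal = tx_power_dbm - path_loss
# Each concrete floor or metal enclosure adds extra loss (rough assumed value)
basement_signal = open_air_signal - 2 * 15.0  # two concrete floors, ~15 dB each

print(f"open air: {open_air_signal:.1f} dBm")
print(f"basement: {basement_signal:.1f} dBm")
```

Because the dB scale is logarithmic, every additional 10 dB of loss cuts the received power by a factor of ten, so a few floors of concrete or a metal lift cabin can quickly drag the signal toward the weak end of what a phone can use.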
COAI Urges Government To Cut Telecom Licence Fees And Revise Spectrum Pricing To Support Viksit Bharat Goals
Budget 2026-27: The telecom industry in India has asked the government to reduce regulatory and tax burdens ahead of the Union Budget 2026-27, saying that continued investment in next-generation networks depends on quick financial support to achieve the goal of a Viksit Bharat. On Tuesday, the Cellular Operators’ Association of India (COAI), which represents major mobile and internet service providers like Reliance Jio, Bharti Airtel, and Vodafone Idea, proposed reducing the telecom licence fee from 3% to just 0.5–1%, saying this would be enough to cover only administrative costs.

The industry also asked the government to lower the GST on spectrum payments, licence fees, and spectrum usage charges from 18 per cent to 5 per cent. COAI said this would not reduce the government’s revenue but would help telecom companies manage their taxes better.

Telecom Operators’ Licence Fee

At present, telecom operators pay 3% of their revenue as a licence fee and 5% to the Digital Bharat Nidhi. “COAI has been advocating measures that would reduce the sector’s financial burden, enabling further expansion and rollout of next-generation connectivity to achieve the goal of a Viksit Bharat,” said Lt. Gen. Dr. S.P. Kochhar, Director General of COAI. He added that the combined burden of the licence fee (3% of AGR) and the Digital Bharat Nidhi contribution (5% of AGR) is a significant financial strain for licensed telcos.
He further mentioned, “The Digital Bharat Nidhi contribution should be paused until the unused corpus has been fully utilised by the Department of Telecommunications.”

COAI Recommends Special Benefits

COAI has also recommended giving telecom operators special benefits under GST, such as exempting GST on regulatory payments like licence fees (LF), spectrum usage charges (SUC), and spectrum assigned through auctions. COAI also suggested allowing the use of existing input tax credit (ITC) balances to pay GST under the Reverse Charge Mechanism (RCM) on LF and SUC. This would not only protect cash flow for telecom companies but also help use up the accumulated ITC.

Kochhar stated, “Since telecom today is no longer just a vertical, but a horizontal value-added enabler for all other sectors, a recalibration of spectrum pricing and assignment models is also necessary.” (With IANS inputs)
Do You Know When The First Mobile Phone With Unlimited Storage Was Launched? Are Users Still Using It? – Explained
Smartphone With Unlimited Cloud Storage: In today’s fast-paced technological world, smartphones boast massive storage options, ranging from hundreds of gigabytes to even 1TB or 2TB, yet storage anxiety refuses to go away. Photos, videos, and apps still manage to fill up space faster than expected. Nearly a decade ago, however, a smartphone quietly offered a solution that felt almost unreal at the time. Google’s first Pixel phone arrived with a promise few could believe: unlimited cloud storage. With no number attached to its name, the original Pixel stood apart from the crowd. While the industry later shifted toward paid cloud plans and larger internal memory, this phone carved out a special place in tech history. Even today, many users continue to rely on it, keeping its story alive through everyday use.

When Was the Mobile Phone With Unlimited Cloud Storage Launched?

Tech giant Google launched the original Pixel and Pixel XL on October 4, 2016. With this launch, it introduced a feature that was truly special for its time. These phones offered free and unlimited cloud storage for photos and videos in original quality on Google Photos. This benefit was available for the lifetime of the device, meaning users did not have to worry about running out of space. Google continued this feature on later Pixel models, up to the Pixel 5. With unlimited cloud storage, users could capture photos and videos freely, knowing their memories were safely stored. This simple idea made Pixel phones both popular and unique.

Google Ended Unlimited Cloud Storage In 2021

Google continued to offer free unlimited cloud storage on Pixel phones until the Pixel 5. However, on June 1, 2021, the company announced the end of unlimited free cloud storage for all users.
After this change, every Google account started getting 15GB of free storage across Photos, Drive, and Gmail. At the same time, Pixel 3 and older Pixel models were allowed to use unlimited original-quality cloud storage until January 31, 2022. This shift was part of Google’s plan to turn its cloud storage service into a paid offering, encouraging users to subscribe for additional storage.

What Google Pixel Users Get Today: Storage Explained

Currently, anyone who buys a Pixel smartphone gets 15GB of free storage per Google account for Photos, Drive, and Gmail. If users need more space, they have to pay for it. However, Google still provides unlimited storage in “Storage Saver” quality for Pixel 5 and older models. This means users can continue storing unlimited photos and videos on these older devices, but not in the original high-quality format. If you need more than 15GB of storage, you can purchase a Google One subscription, which offers various paid plans starting from 100GB, depending on your region.

Are Users Still Using Old Google Pixel Phones?

Even after Google changed its policy, many people held on to their older Pixel smartphones. Over time, these devices found a new purpose, quietly serving as backup hubs for photos and videos. Even today, users rely on their old Pixels to store memories from their new devices, taking full advantage of the unlimited storage they once offered. The era of unlimited storage may be over, but this feature left a lasting mark, making the original Pixel phones one of the most memorable and cherished chapters in tech history.
97 Per Cent Of HR Leaders In India Expect Humans To Work Alongside AI By 2027: Report
Mumbai: Around 97 per cent of HR leaders in the tech sector in India feel that by 2027, work will be done by humans working alongside AI rather than engaging with it intermittently, a report said on Tuesday. The report from Nasscom and Indeed, based on a poll of over 120 HR leaders in the tech sector in the country, found that 20-40 per cent of work in technology firms is already AI-driven. Around 45 per cent of respondents reported that over 40 per cent of software development is now handled by AI, it said.

“As AI adoption deepens, skilling and capability building will be central to ensuring that talent continues to move up the value chain and delivers meaningful outcomes for businesses,” said Ketaki Karnik, Head of Research, Nasscom.

The report highlighted a shift in AI’s role from a supplementary tool to an integral part of everyday roles, workflows, and decision-making processes, with strong participation in intelligent automation (39 per cent) and business process management (37 per cent). Meanwhile, over half of respondents cited low-quality or incomplete AI outputs, underscoring the need for human oversight. The most effective human-AI partnerships are emerging across higher-order activities such as scope definition, system architecture, and data model design. More routine and repeatable tasks, including boilerplate code generation and unit test creation, are expected to be increasingly automated by AI over the next two to three years, the report said.

Hiring is evolving toward skills-based assessment, with 85 per cent of managers prioritising skills-based hiring over credentials and 98 per cent highlighting the need for hybrid and multidisciplinary skills. Around 83 per cent of HR leaders have redesigned work by adding AI-specific roles. With respect to AI adoption, 79 per cent prioritised internal reskilling as a dominant strategy.
Around 80 per cent of organisations followed a hybrid approach, with most employees working from the office three or more days a week, the report noted.
India AI Impact Summit 2026 To Spotlight AI Solutions Transforming Education, Healthcare, And Governance: Experts
AI Impact Summit 2026: The upcoming ‘India AI Impact Summit 2026’ will position the country as a landmark global destination that will shape the future of responsible and inclusive Artificial Intelligence (AI), experts have said. According to an IT Ministry statement on Tuesday, the 38th episode of ‘Digital India Ask Our Experts’ highlighted the ‘India AI Impact Summit 2026’, to be held in the national capital from February 16-20. Experts explained how the Summit is built around the three guiding pillars or ‘Sutras’ of People, Planet and Progress, with focused working groups or ‘Chakras’. The discussions and outcomes from these groups are expected to influence AI policy, skilling strategies and implementation across India and the Global South, said the ministry.

They also highlighted opportunities for youth, startups, women innovators and learners from Tier-2 and Tier-3 cities, including AI and Data Labs, global challenges, pitch fests and the ‘YUVAI Global Youth Challenge’. “Viewers were informed about the ‘India AI Impact Expo 2026’, to be held at Bharat Mandapam from February 16–20, which will demonstrate how AI solutions are transforming sectors such as education, healthcare, agriculture and governance,” the ministry statement said.

It further stated that citizens raised questions on AI infrastructure, open data access, healthcare datasets, startup participation, governance, inclusion of non-tech users, and online participation. Experts assured that IndiaAI is working towards open, secure and inclusive platforms that enable participation from individuals, small teams and public sector organisations. Last week, Prime Minister Narendra Modi urged startups to leverage AI for societal good. Chairing a roundtable with Indian AI startups at his residence at 7, Lok Kalyan Marg, the Prime Minister urged making AI affordable, inclusive, and transparent.
Calling his interaction with the youngsters “memorable and insightful”, he urged them to use AI for the betterment of society. PM Modi also lauded the AI-based startups for working in myriad fields ranging from e-commerce to material research to healthcare.
WhatsApp Zero-Day Attack: Even Missed Or Incoming Voice Call Can Hack Your Smartphone; Here’s How To Secure Device During Lohri
WhatsApp Zero-Day Attack: For millions of people, WhatsApp is a crucial part of daily life, used for chatting with friends, making calls, and sharing moments. But a fresh cybersecurity alert has raised serious concerns. Experts have discovered a “zero-day” vulnerability in WhatsApp’s voice call feature that could allow hackers to take control of a smartphone through a single incoming call, especially during festive occasions like Lohri. Such a vulnerability remains unknown to the software developer until it is actively exploited. Our phones store vast amounts of private information: personal photos, contact lists, and sensitive financial details. This warning shows that even a simple call could be enough for hackers to access your device and compromise your privacy.

What Is A Zero-Day Attack?

A zero-day threat on WhatsApp is a type of security vulnerability that hackers can exploit before the app developers even know it exists or have released a fix. The term “zero-day” means that developers have had zero days to fix the problem once it is discovered. In the case of WhatsApp, this threat could allow hackers to access a user’s phone, steal data, or install malware through the voice call feature. The most dangerous part is that the attack can happen without the user answering the call or clicking on any link, making it very hard to detect.

Festive Occasions Bring Higher Risk Of WhatsApp Threats

Researchers who study cybercrime say that these attacks often increase when people are more active online. During festive seasons like Lohri, holidays, or travel times, there are more calls and messages from unknown numbers, which makes it easier for hackers to hide their actions. In some cases, even a missed or incoming WhatsApp voice call can be enough for the attack to happen.

WhatsApp Zero-Day Attack: How To Secure Your Device

Step 1: Always keep WhatsApp and your phone’s operating system updated to the latest version.
Step 2: Turn on two-step verification in WhatsApp for extra account security.
Step 3: Use privacy settings to block or mute calls from unknown numbers.
Step 4: Do not click on suspicious links, attachments, or festive messages from strangers.
Step 5: Handle banking and payment tasks only through official apps.
Step 6: If your phone behaves strangely or you suspect a hack, report it immediately to cyber helplines or the authorities.