Training a language model with a deep transformer architecture is time-consuming. However, there are techniques you can use to accelerate training. In this article, you will learn about:

- Using torch.compile() to speed up the model
- Using gradient accumulation to train a model with a larger effective batch size

Let's get started!

Train a Model Faster with torch.compile and Gradient Accumulation
Photo by François Genon. Some rights reserved.

Overview

This article is divided into two parts; they are:

- Using torch.compile()
- Gradient Accumulation

Using torch.compile()

When you write your model code and run it with PyTorch, it executes in eager mode: each operation runs line by line as it is encountered, with intermediate results held in memory. This is natural for Python as an interpreted language, and it is why you do not see an error in your code until the offending line actually runs.

Running a model in eager mode is slow. Starting with PyTorch 2.0, you can use torch.compile() to compile a model for better performance. Compilation produces a new, optimized model object. It is not the same object you created with nn.Module, but it shares the same tensors with the original model. You can use the compiled model for the forward pass, backward pass, and optimizer updates as usual.

Building a model and compiling it into a computation graph is how TensorFlow 1.0 was designed to work. It makes debugging harder, because the model you execute may not match the code you wrote line by line. Therefore, you should not compile your model until you have run a trial and confirmed that it is error-free. Not all models can be compiled, but if your model supports compilation, you immediately benefit from the speedup.
To compile a model, all you need to do is replace the model object right before you are ready to use it:

```python
...
model = LlamaForPretraining(model_config).to(device)
model.load_state_dict(checkpoint)
model = torch.compile(model)
...
```

Do not load the model weights after compilation. The compiled model is an object that shares its weights with the original model: during compilation, the computation graph is built referencing the original model's weight tensors. If you load the weights after compilation, the model may not work as expected.

Similarly, to save the compiled model, you should refer to the original model's state dict, as follows:

```python
torch.save(getattr(model, "_orig_mod", model).state_dict(), "model.pth")
```

The original model can be accessed from the compiled model as model._orig_mod. The code above uses getattr(model, "_orig_mod", model) to get the original model if it exists, or model itself if it does not. This one line therefore works for both compiled and uncompiled models.

Gradient Accumulation

When you train a model, you likely spend two to three times longer on the backward pass than on the forward pass, because the backward pass is more computationally intensive and uses more memory. One easy trick to speed up training is to perform fewer backward passes. This can be achieved by increasing the batch size: with the same number of data samples, a larger batch size means fewer batches to process. However, a larger batch size requires more memory. In a memory-constrained environment, you can mimic a larger batch size by running multiple forward passes and accumulating the gradients before a single parameter update. This is called gradient accumulation. It is easier to explain this idea with code:
```python
accumulate_steps = 4

for epoch in range(num_epochs):
    optimizer.zero_grad()
    for i, batch in enumerate(dataloader):
        # get batched data
        input_ids, target_ids = batch
        # create attention mask: causal mask + padding mask
        attn_mask = create_causal_mask(input_ids.shape[1], device) + \
                    create_padding_mask(input_ids, PAD_TOKEN_ID, device)
        # extract output from model
        logits = model(input_ids, attn_mask)
        # compute loss: cross-entropy between logits and target, ignoring padding tokens
        loss = loss_fn(logits.view(-1, logits.size(-1)), target_ids.view(-1))
        loss = loss / accumulate_steps
        # Run backward, but update only once every `accumulate_steps` steps
        loss.backward()
        if (i + 1) % accumulate_steps == 0:
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
            optimizer.zero_grad()
            scheduler.step()
```

The training loop above is an excerpt from the previous article for training a Llama model on your local GPU. Normally, when you run a forward pass, you calculate the loss. Then you call loss.backward() to backpropagate the loss gradient through the model parameters.
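The accumulation trick works because backward() adds to each parameter's .grad rather than overwriting it, so several backward passes on losses scaled by 1/accumulate_steps leave the same gradients as one backward pass on the full batch (assuming equal-sized micro-batches and a mean-reduced loss). A minimal numeric check of this claim, using a toy linear model and MSE loss in place of the Llama model (the shapes and data are arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(8, 4)
y = torch.randn(8, 1)

def grads(model, batches):
    """Backpropagate each (inputs, targets) pair and return the accumulated gradients."""
    model.zero_grad()
    for inputs, targets in batches:
        # scale each micro-batch loss so the accumulated total matches the full-batch mean
        loss = nn.functional.mse_loss(model(inputs), targets) / len(batches)
        loss.backward()  # .grad accumulates across calls
    return [p.grad.clone() for p in model.parameters()]

model = nn.Linear(4, 1)

full = grads(model, [(X, y)])                           # one backward pass on the full batch
accum = grads(model, [(X[:4], y[:4]), (X[4:], y[4:])])  # two micro-batches, losses scaled by 1/2

for g_full, g_accum in zip(full, accum):
    assert torch.allclose(g_full, g_accum, atol=1e-5)
print("accumulated gradients match the full-batch gradients")
```

The agreement is exact up to floating-point rounding; the optimizer therefore sees the same gradients it would with the larger batch, just less often.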
In PyTorch, the backward() method is cumulative: gradients are added up across calls. Therefore, you need to call optimizer.zero_grad() explicitly to clear the gradients before running the backward pass. In the code above, you deliberately do not call optimizer.zero_grad() on every iteration. Instead, you run backpropagation on the loss divided by accumulate_steps. This way, the gradients are scaled down but accumulated over accumulate_steps iterations. Once every accumulate_steps iterations, you run the optimizer to adjust the model parameters.

This approach yields results comparable to using a larger batch size. However, since you run fewer optimizer updates, the learning rate schedule should be adjusted accordingly. This means you need to initialize the scheduler with a different number of steps:

```python
...
num_training_steps = (len(dataloader) // accumulate_steps) * num_epochs
cosine_scheduler = lr_scheduler.CosineAnnealingLR(
    optimizer,
    T_max=num_training_steps - num_warmup_steps,
    eta_min=0
)
```

Further Reading

Below are some materials that you may find interesting:
- Training a Model on Multiple GPUs with Data Parallelism
However, it could support faster 60W charging. The Galaxy S26 Plus may continue with a 4,900mAh battery. However, the standard Galaxy S26 might pack a larger battery with 4,300mAh capacity, up from 4,000mAh on S25. Prices are expected to remain similar to last year. For reference, the Galaxy S25 was launched in India at Rs 80,999. The S25 Plus and the Galaxy S25 Ultra were priced from Rs 99,999 and Rs 1,29,999, respectively.
Realme Pad 3 5G Confirmed: Massive 12,200mAh Battery, 2.8K Display & More – Launching On… | Technology News
Realme Pad 3 5G: Realme has confirmed that the Pad 3 5G tablet will launch in India alongside the upcoming Realme 16 Pro series. The tablet is already listed on the company’s website. This reveals key details about its battery, display and design. The new model will replace the Realme Pad 2, which was launched in July 2023. The Realme Pad 3 5G will come with a much bigger battery than its predecessor. Additionally, it will offer a higher-resolution display than the Pad 2, which features an 11.5-inch 2K screen. The microsite for the Realme 16 Pro series now confirms that the Realme Pad 3 5G will be unveiled at the same event. The launch is scheduled for January 6, 2026, at 12 pm IST. The tablet has also appeared on the Realme India online store. This confirms that it will be sold through the platform and hints at a few core features. Realme has confirmed that the Pad 3 5G will pack a 12,200mAh Titan Battery. It will feature a Book-View Display with slim bezels and a 2.8K resolution. The tablet will sport a dual rear camera setup with an LED flash. The cameras sit inside a square-shaped module. Realme branding is placed at the centre of the back panel. The tablet will be available in at least black and gold colour options. More details are expected to be revealed soon. Add Zee News as a Preferred Source Rumors suggest that the Realme Pad 3 5G is likely to be powered by a MediaTek Dimensity 7300-Max chipset. It might run on Android 16-based Realme UI 7.0. The tablet will reportedly be just 6.6mm thick. The display could support a 120Hz refresh rate, 296ppi pixel density and 1.07 billion colours. The Realme Pad 3 5G will succeed the Realme Pad 2, which features an 11.5-inch 2K display with up to a 120Hz refresh rate. It offers 450 nits of peak brightness and an 85.2 percent screen-to-body ratio. The Pad 2 is powered by a MediaTek Helio G99 chip and offers up to 8GB RAM and 256GB storage. It packs an 8,360mAh battery with 33W fast charging support.
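Below is a minimal sketch of gradient accumulation. The tiny linear model, synthetic data, and the name `accumulation_steps` are illustrative stand-ins for the LLM, dataloader, and optimizer used elsewhere in this article; the loop structure is what matters.

```python
import torch
import torch.nn as nn

# Illustrative setup: a tiny model and synthetic data stand in for
# the larger model and dataloader used elsewhere in this article.
torch.manual_seed(0)
model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

accumulation_steps = 4  # effective batch size = micro-batch size x 4

optimizer.zero_grad()
for step in range(16):
    x = torch.randn(8, 16)            # micro-batch of 8 samples
    y = torch.randint(0, 4, (8,))
    loss = loss_fn(model(x), y)
    # Scale the loss so the accumulated gradient averages over the
    # effective batch instead of summing micro-batch gradients.
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()              # one update per 4 forward/backward passes
        optimizer.zero_grad()
```

The key point is that `optimizer.step()` and `optimizer.zero_grad()` run only once every `accumulation_steps` micro-batches, so the gradients from the intermediate `backward()` calls add up in each parameter's `.grad` before a single optimizer update is applied.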