BGMI 4.2 Update: Krafton India is set to release the BGMI 4.2 update on January 15, 2026. The update will be rolled out in phases to avoid server overload. Android and iOS users will receive the update on the same day, but in different time windows. For Android users, the update will begin appearing on the Google Play Store from 6:30 AM IST, with wider availability expected between 11:30 AM and 12:30 PM. iOS users can expect the update between 8:30 AM and 9:30 AM IST, with the rollout completing by 12:30 PM. The update size is expected to be between 0.9GB and 1.5GB.

The update introduces a new Primewood Genesis theme and, in collaboration with Royal Enfield, lets players ride the Bullet 350 and Continental GT 650 in the battlegrounds. The Primewood Genesis theme features nature-inspired environments resembling magical forests. Players will encounter special plants, high-loot zones, and interactive elements such as the Tree of Life, which can be used as cover. Some plants provide weapons and supplies, while poisonous flowers pose a threat during combat.
YouTube Earnings In India: How Much Creators Earn Per 1,000 Views, Top Creator Secrets, And Monetization Rules Revealed
YouTube Earnings Per 1000 Views In India: YouTube is often seen as a platform for entertainment and time pass. However, behind popular videos, many creators are building successful careers. In India, several YouTubers earn crores of rupees by creating content that attracts large and loyal audiences. Their success does not come overnight. It starts with regular uploads and a clear understanding of what viewers want to watch. Creators working in gaming, comedy, tech, and education slowly grow their reach. Over time, they earn not only through advertisements but also through brand deals and their own products, which become the real source of massive income. In this article, we explain how they make their videos reach millions of views, what YouTube pays per 1,000 views in India, and how you can run your channel in a similar way.

YouTube Earnings Start With Google AdSense

For most creators, YouTube earnings start with Google AdSense. As videos get more views and watch time increases, ad revenue grows. But successful creators know that AdSense is only the beginning. They treat it as a foundation while exploring other ways to scale their income.

YouTube Earnings: Brand Deals And Sponsorships

The real money for top YouTubers comes from brand deals and sponsorships. Channels with a loyal and engaged audience attract brands willing to pay anywhere from lakhs to crores for a single video. Here, audience trust and quality matter more than follower count.

YouTube Earnings: Personal Brand And Online Courses

Top YouTubers do more than make videos. They build a personal brand by selling online courses, e-books, or merchandise. The trust they earn from their audience turns these ventures into steady income and helps them expand beyond YouTube, ensuring long-term success.

YouTube Earnings: Affiliate Marketing

Affiliate marketing has become a key income source for many creators. They place product links in video descriptions or comments and earn a commission whenever viewers make a purchase. This method is particularly effective in niches such as tech, beauty, fitness, and education, allowing creators to earn consistently while recommending useful products to their audience.

YouTube Earnings: Secret Formula To Make Crores

Successful creators do not just follow trends; they set them. They have a strong understanding of SEO, thumbnails, titles, and audience behavior. Regular uploads, consistent timing, and content that provides real value are their most powerful tools. These strategies help them stand out on a crowded platform. In short, there is no shortcut to earning crores on YouTube, but with the right planning and approach, it is possible. Creators who treat YouTube as a business focus on trust and value, not just views, and that is what drives long-term success.

YouTube Earnings Per 1,000 Views In India

In India, YouTube earnings per 1,000 views, called RPM, usually range from Rs 50 to Rs 200 after YouTube takes its 45% share of ad revenue. Earnings depend on the niche, audience location, ad engagement, and video length. Finance or tech videos often earn more, around Rs 100 to Rs 300 per 1,000 views. Not all views generate money, because only views that serve ads count, and views from foreign audiences can increase earnings.

YouTube Monetization Rules

To earn money on YouTube, a channel must meet basic eligibility requirements.
It needs at least 1,000 subscribers and either 4,000 valid public watch hours in the past 12 months or 10 million valid views on Shorts within the last 90 days. Meeting these thresholds allows creators to apply for monetization and start earning revenue from their content.
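To make the RPM arithmetic concrete, here is a minimal Python sketch of the earnings math described above. The RPM figures and the subscriber/watch-hour/Shorts-view thresholds come from the article; the function names and sample numbers are illustrative assumptions, not any official YouTube API.

def estimate_earnings_inr(monetized_views: int, rpm_inr: float) -> float:
    """Estimated creator earnings in rupees. RPM is revenue per 1,000
    monetized views, already net of YouTube's 45% share."""
    return (monetized_views / 1000) * rpm_inr

def is_monetization_eligible(subscribers: int, watch_hours_12mo: float,
                             shorts_views_90d: int) -> bool:
    """Thresholds cited in the article: 1,000 subscribers plus either
    4,000 valid public watch hours (12 months) or 10M Shorts views (90 days)."""
    return subscribers >= 1_000 and (watch_hours_12mo >= 4_000
                                     or shorts_views_90d >= 10_000_000)

# Example: 1 million monetized views at an assumed Rs 150 RPM, the midpoint
# of the Rs 100-300 band quoted for finance/tech content.
print(estimate_earnings_inr(1_000_000, 150))      # 150000.0, i.e. Rs 1.5 lakh
print(is_monetization_eligible(1_200, 4_500, 0))  # True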
Worried About Your Smartphone's Battery Health? Check Which Charger Is Best: 30W, 60W, Or 90W, And Does Charging Speed Affect Battery Life?
Smartphone Battery Health: With smartphones supporting fast charging, charger ratings like 30W, 65W, or 90W have become common talking points in discussions of phone charging and battery health. Users often assume a higher-watt charger is always better, but wattage simply describes how much power a charger can deliver. In technical terms, a watt (W) is a unit of power that shows how much energy is transferred per second. For chargers, wattage is the product of voltage (V) and current (A). Higher wattage usually means the charger can supply more power, which leads to faster charging, provided the phone supports it.

30W vs 90W Chargers: What's The Difference?

A 30W charger is commonly used with mid-range and some flagship smartphones. It offers balanced charging speed and generates less heat. A 90W charger, on the other hand, is designed for phones that support ultra-fast charging, usually premium models. These chargers can refill the battery much faster, sometimes reaching 50 percent in under 15 minutes. However, if a phone supports only 30W charging, a 90W charger will not force extra power into the device; the phone draws only the power it is designed to handle.

Does Higher Wattage Harm Battery?

A common concern is whether fast charging shortens battery life. Lithium-ion batteries, used in smartphones, are sensitive to heat. Higher-watt charging can generate more heat, especially during the early stages of charging, and repeated exposure to high temperatures can reduce battery health over time. However, modern smartphones have battery management systems that control power flow, temperature, and charging speed to prevent damage. Many phones slow down charging once the battery reaches around 80 percent to protect long-term battery health.

Charging Speed vs Battery Health

Faster charging is convenient and time-saving, but slower charging can be gentler on the battery. A lower-watt charger, such as 20W or 30W, produces less heat and may help maintain battery health over several years. That said, fast chargers do not harm battery health on a well-designed device; manufacturers test batteries to handle fast charging within safe limits.

Which Charger Should You Use?

The best charger is simply the one recommended by your phone's manufacturer. A certified charger that matches your phone's supported wattage ensures safe and efficient charging. For daily use, moderate-watt chargers are ideal, while high-watt chargers are useful when fast charging is needed.
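As a concrete illustration of the wattage arithmetic and power negotiation described above, here is a minimal Python sketch. The P = V × A relationship and the "phone draws only what it supports" behaviour are from the article; the function names and sample figures are illustrative assumptions.

def charger_watts(volts: float, amps: float) -> float:
    """Power in watts is voltage multiplied by current (P = V * A)."""
    return volts * amps

def effective_charging_watts(charger_w: float, phone_max_w: float) -> float:
    """A phone draws only the power it is designed to handle, so the
    effective rate is capped by the lower of the two ratings."""
    return min(charger_w, phone_max_w)

# A 20V / 4.5A brick is a 90W charger...
print(charger_watts(20.0, 4.5))              # 90.0
# ...but a phone that supports only 30W still charges at 30W.
print(effective_charging_watts(90.0, 30.0))  # 30.0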
Watchdog Asks X To Set Up Minor Protection Measures For AI Chatbot Grok
Seoul: South Korea's media watchdog said on Wednesday it has asked U.S.-based social media platform X to come up with measures to protect minor users from sexual content generated by the artificial intelligence (AI) model Grok. The Korea Media and Communications Commission (KMCC) said it delivered the request to the operator amid growing concerns over deepfake sexual content that can be generated by AI platforms, reports Yonhap news agency. "We have asked the operator of X to prevent potential illegal activities on Grok and submit measures to protect teenagers from harmful content, including limiting or managing their access," the KMCC said in a release.

Under South Korean law, operators of social network platforms, including X, are required to designate an official in charge of minor protection and submit an annual report, the commission said. The KMCC said the request was made in line with that regulation, noting that creating, circulating or saving sexual deepfake content generated without consent is subject to criminal punishment. "We intend to proactively support the sound and safe development of new technologies," KMCC Chairperson Kim Jong-cheol said in a release. "As for side effects and negative impacts, we plan to introduce reasonable regulations and revamp policies to prevent the circulation of illegal information, including sexual abuse content, and require AI service providers to protect minors," Kim said.

Meanwhile, Elon Musk-run X Corp has acknowledged the presence of obscene imagery on its platform, mostly created by its Grok AI, stating that it will comply with Indian laws and remove such content. The Indian government had directed X to conduct a comprehensive review of Grok's technical and governance frameworks to prevent the generation of unlawful content. It said Grok must enforce strict user policies, including suspension and termination of violators, and that all offending content should be removed immediately without tampering with evidence.
Mastering LLM Tool Calling: The Complete Framework for Connecting Models to the Real World
iPhone Face ID Not Working Properly? Try This Hidden iOS Setting From Apple: How It Works And How To Set It Up
iPhone Face ID Not Working: Unlocking your iPhone should be quick and easy. You pick up the phone, look at the screen, and expect it to open right away. But sometimes Face ID feels slow or does not work at all. It may fail in low light, when you wear glasses, or when you change your hairstyle. This can be annoying, especially when you are in a hurry. Many users think this happens because of a phone problem or old software. However, the reason may be much simpler. There is a hidden setting in iOS that many people do not know about, and turning it on can help Face ID recognise your face better and work faster. Apple offers this option as Alternate Appearance within the Face ID settings. While many users are unaware of its purpose, this feature is designed to improve Face ID reliability, especially for those who face frequent recognition failures in daily use.

Apple's Alternate Appearance Feature: How It Works

This feature lets users scan their face a second time in different conditions. Instead of using just one face scan, Face ID learns from two scans, which helps it recognise you more accurately from different angles or looks. It works well if your appearance changes often, such as when you wear glasses, grow a beard, or change hairstyles. However, this setting is not meant for sharing your phone with someone else. Apple clearly says that Alternate Appearance is only for the same user and not for allowing another person to unlock your iPhone.

Apple's Alternate Appearance Feature: How To Set It Up

Step 1: Open the Settings app on your iPhone and tap Face ID & Passcode.
Step 2: Enter your device passcode to access the Face ID settings.
Step 3: Scroll down and tap Set Up an Alternate Appearance under the Face ID controls.
Step 4: Follow the on-screen instructions by placing your face in the frame and slowly moving your head. Keep the phone about 25 to 30 centimetres away.
Step 5: Scan your face in the conditions where Face ID usually fails, such as wearing glasses or makeup, or holding the phone at a different angle.
Cyberattack At Kyowon 'Exposes' Over 9 Million User Accounts To Possible Breach
Seoul: South Korean cybersecurity authorities estimate that around 9.6 million accounts may have been affected by a recent cyberattack at Kyowon Group, a local education service provider, informed sources said on Wednesday. The estimate by a government investigation team that includes the Korea Internet & Security Agency comes after Kyowon Group reported a possible breach on Monday, saying it had detected traces of a ransomware attack, reports Yonhap news agency. Kyowon said it became aware of abnormal activities in its internal system on Saturday and later identified a possible data breach. The authorities estimate that 600 of the company's 800 servers fall within the scope of the breach.

The investigation team estimates Kyowon Group's eight affiliates held 13 million members in total, a figure that narrows to 5.54 million after removing overlaps; the 9.6-million estimate counts users holding more than one account. As Kyowon Group operates a wide range of businesses, including tutoring, home appliance rentals and funeral services, experts said the number of victims could be substantial. Kyowon Group has yet to confirm whether its members' personal data was actually leaked. "We have identified indications of a possible data leak, and an investigation is under way with relevant organisations and security institutions to determine whether consumers' data was actually breached," Kyowon Group said in a release. "If customer data is confirmed to have been leaked, we will notify users in a transparent manner," the company added.

Meanwhile, more than 150,000 customers of KT Corp., South Korea's second-largest mobile carrier, have left the company for a different service provider after KT began waiving early termination fees following a major data breach, industry sources said last week. According to the sources, 154,851 KT users switched to rival carriers between Dec. 31 and Thursday, averaging more than 17,000 departures per day. SK Telecom, the country's largest carrier, which implemented a similar penalty waiver in July after a large-scale data leak, lost about 160,000 users following its own incident.
A Gentle Introduction to Language Model Fine-tuning
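What follows is the article's inference script. It redefines the Llama-style architecture used during pretraining (rotary position encoding, grouped-query attention, a SwiGLU feed-forward block), then loads a saved checkpoint and tokenizer and generates text with temperature, top-k, and repetition-penalty sampling. The checkpoint and tokenizer filenames are the article's own placeholders.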
import dataclasses

import tokenizers
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import Tensor


# Model architecture same as training script

@dataclasses.dataclass
class LlamaConfig:
    """Define Llama model hyperparameters."""
    vocab_size: int = 50000
    max_position_embeddings: int = 2048
    hidden_size: int = 768
    intermediate_size: int = 4 * 768
    num_hidden_layers: int = 12
    num_attention_heads: int = 12
    num_key_value_heads: int = 3


class RotaryPositionEncoding(nn.Module):
    """Rotary position encoding."""
    def __init__(self, dim: int, max_position_embeddings: int) -> None:
        super().__init__()
        self.dim = dim
        self.max_position_embeddings = max_position_embeddings
        N = 10_000.0
        inv_freq = 1.0 / (N ** (torch.arange(0, dim, 2) / dim))
        inv_freq = torch.cat((inv_freq, inv_freq), dim=-1)
        position = torch.arange(max_position_embeddings)
        sinusoid_inp = torch.outer(position, inv_freq)
        self.register_buffer("cos", sinusoid_inp.cos())
        self.register_buffer("sin", sinusoid_inp.sin())

    def forward(self, x: Tensor) -> Tensor:
        batch_size, seq_len, num_heads, head_dim = x.shape
        device = x.device
        dtype = x.dtype
        cos = self.cos.to(device, dtype)[:seq_len].view(1, seq_len, 1, -1)
        sin = self.sin.to(device, dtype)[:seq_len].view(1, seq_len, 1, -1)
        x1, x2 = x.chunk(2, dim=-1)
        rotated = torch.cat((-x2, x1), dim=-1)
        return (x * cos) + (rotated * sin)


class LlamaAttention(nn.Module):
    """Grouped-query attention with rotary embeddings."""
    def __init__(self, config: LlamaConfig) -> None:
        super().__init__()
        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.num_kv_heads = config.num_key_value_heads
        assert (self.head_dim * self.num_heads) == self.hidden_size
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(self.hidden_size, self.num_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(self.hidden_size, self.num_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)

    def forward(self, hidden_states: Tensor, rope: RotaryPositionEncoding) -> Tensor:
        bs, seq_len, dim = hidden_states.size()
        query_states = self.q_proj(hidden_states).view(bs, seq_len, self.num_heads, self.head_dim)
        key_states = self.k_proj(hidden_states).view(bs, seq_len, self.num_kv_heads, self.head_dim)
        value_states = self.v_proj(hidden_states).view(bs, seq_len, self.num_kv_heads, self.head_dim)
        attn_output = F.scaled_dot_product_attention(
            rope(query_states).transpose(1, 2),
            rope(key_states).transpose(1, 2),
            value_states.transpose(1, 2),
            is_causal=True,
            dropout_p=0.0,
            enable_gqa=True,
        )
        attn_output = attn_output.transpose(1, 2).reshape(bs, seq_len, self.hidden_size)
        return self.o_proj(attn_output)


class LlamaMLP(nn.Module):
    """Feed-forward network with SwiGLU activation."""
    def __init__(self, config: LlamaConfig) -> None:
        super().__init__()
        self.gate_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.up_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.act_fn = F.silu
        self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)

    def forward(self, x: Tensor) -> Tensor:
        gate = self.act_fn(self.gate_proj(x))
        up = self.up_proj(x)
        return self.down_proj(gate * up)


class LlamaDecoderLayer(nn.Module):
    """Single transformer layer for a Llama model."""
    def __init__(self, config: LlamaConfig) -> None:
        super().__init__()
        self.input_layernorm = nn.RMSNorm(config.hidden_size, eps=1e-5)
        self.self_attn = LlamaAttention(config)
        self.post_attention_layernorm = nn.RMSNorm(config.hidden_size, eps=1e-5)
        self.mlp = LlamaMLP(config)

    def forward(self, hidden_states: Tensor, rope: RotaryPositionEncoding) -> Tensor:
        residual = hidden_states
        hidden_states = self.input_layernorm(hidden_states)
        attn_outputs = self.self_attn(hidden_states, rope=rope)
        hidden_states = attn_outputs + residual
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        return self.mlp(hidden_states) + residual


class LlamaModel(nn.Module):
    """The full Llama model without any pretraining heads."""
    def __init__(self, config: LlamaConfig) -> None:
        super().__init__()
        self.rotary_emb = RotaryPositionEncoding(
            config.hidden_size // config.num_attention_heads,
            config.max_position_embeddings,
        )
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        self.layers = nn.ModuleList([
            LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)
        ])
        self.norm = nn.RMSNorm(config.hidden_size, eps=1e-5)

    def forward(self, input_ids: Tensor) -> Tensor:
        hidden_states = self.embed_tokens(input_ids)
        for layer in self.layers:
            hidden_states = layer(hidden_states, rope=self.rotary_emb)
        return self.norm(hidden_states)


class LlamaForPretraining(nn.Module):
    def __init__(self, config: LlamaConfig) -> None:
        super().__init__()
        self.base_model = LlamaModel(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

    def forward(self, input_ids: Tensor) -> Tensor:
        hidden_states = self.base_model(input_ids)
        return self.lm_head(hidden_states)


def apply_repetition_penalty(logits: Tensor, tokens: list[int], penalty: float) -> Tensor:
    """Apply repetition penalty to the logits."""
    for tok in tokens:
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    return logits


@torch.no_grad()
def generate(model, tokenizer, prompt, max_tokens=100, temperature=1.0,
             repetition_penalty=1.0, repetition_penalty_range=10, top_k=50,
             device=None) -> str:
    """Generate text autoregressively from a prompt.

    Args:
        model: The trained LlamaForPretraining model
        tokenizer: The tokenizer
        prompt: Input text prompt
        max_tokens: Maximum number of tokens to generate
        temperature: Sampling temperature (higher = more random)
        repetition_penalty: Penalty for repeating tokens
        repetition_penalty_range: Number of previous tokens to consider for repetition penalty
        top_k: Only sample from top k most likely tokens
        device: Device the model is loaded on

    Returns:
        Generated text
    """
    # Turn model to evaluation mode: Norm layer will work differently
    model.eval()

    # Get special token IDs
    bot_id = tokenizer.token_to_id("[BOT]")
    eot_id = tokenizer.token_to_id("[EOT]")

    # Tokenize the prompt into integer tensor
    prompt_tokens = [bot_id] + tokenizer.encode(" " + prompt).ids
    input_ids = torch.tensor([prompt_tokens], dtype=torch.int64, device=device)

    # Autoregressively generate tokens, one per iteration
    generated_tokens = []
    for _step in range(max_tokens):
        # Forward pass through model
        logits = model(input_ids)
        # Get logits for the last token
        next_token_logits = logits[0, -1, :] / temperature
        # Apply repetition penalty
        if repetition_penalty != 1.0 and len(generated_tokens) > 0:
            next_token_logits = apply_repetition_penalty(
                next_token_logits,
                generated_tokens[-repetition_penalty_range:],
                repetition_penalty,
            )
        # Apply top-k filtering
        if top_k > 0:
            top_k_logits = torch.topk(next_token_logits, top_k)[0]
            indices_to_remove = next_token_logits < top_k_logits[-1]
            next_token_logits[indices_to_remove] = float("-inf")
        # Sample from the filtered distribution
        probs = F.softmax(next_token_logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        # Early stop if EOT token is generated
        if next_token.item() == eot_id:
            break
        # Append the new token to input_ids for next iteration
        input_ids = torch.cat([input_ids, next_token.unsqueeze(0)], dim=1)
        generated_tokens.append(next_token.item())

    # Decode all generated tokens
    return tokenizer.decode(generated_tokens)


checkpoint = "llama_model_final.pth"  # saved model checkpoint
tokenizer_path = "bpe_50K.json"       # saved tokenizer
max_tokens = 100
temperature = 0.9
top_k = 50
penalty = 1.1
penalty_range = 10

# Load tokenizer and model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = tokenizers.Tokenizer.from_file(tokenizer_path)
config = LlamaConfig()
model = LlamaForPretraining(config).to(device)
model.load_state_dict(torch.load(checkpoint, map_location=device))

prompt = "Once upon a time, there was"
response = generate(
    model=model,
    tokenizer=tokenizer,
    prompt=prompt,
    max_tokens=max_tokens,
    temperature=temperature,
    top_k=top_k,
    repetition_penalty=penalty,
    repetition_penalty_range=penalty_range,
    device=device,
)
print(prompt)
print("-" * 20)
print(response)
Amazon Great Republic Day Sale 2026: From iPhone Air To OnePlus 15R; Check Top Deals On Budget-Friendly Smartphones
Amazon's Great Republic Day Sale 2026: Amazon's Great Republic Day Sale 2026 is almost here, and excitement is building among online shoppers. The US-based e-commerce giant is gearing up to take on Flipkart's Republic Day Sale 2026 with a range of lucrative offers. As the sale begins, buyers can look forward to big discounts on smartphones, turning this shopping season into a great opportunity for tech lovers. The Amazon Great Republic Day Sale 2026 will officially begin on January 16, 2026. Amazon has revealed that smartphones across all price categories, from ultra-premium and premium to mid-range and budget, will be available at reduced prices. Adding to the excitement, popular brands such as Apple, Samsung, OnePlus, Realme, Redmi, and iQOO will feature special deals, giving customers plenty of options to choose from. Notably, the sale will offer an instant 10 percent discount with HDFC Bank credit cards, apart from easy EMI options.

From iPhone Air To OnePlus 15: Discounts On Top Smartphones

Top smartphones are available at exciting discounts. The Samsung Galaxy S25 Ultra is priced at Rs 1,19,999, down from Rs 1,29,999, while the iPhone 17 Pro will cost Rs 1,25,400, reduced from Rs 1,34,900. The OnePlus 15 is available for Rs 68,999 instead of Rs 76,999, and the iQOO 15 has dropped to Rs 65,999 from Rs 76,999. Even the iPhone 15 is on sale for Rs 50,249, down from its original price of Rs 59,900.

From OnePlus 15R To iQOO Neo 10 5G: Discounts On Mid-Range Phones

The OnePlus 15R is now available for Rs 44,999, down from Rs 54,999, while the iQOO Neo 10 5G is priced at Rs 33,999, reduced from Rs 38,999.

Budget Phones On Discount: Nord CE 5 To Redmi Note 15 5G

For budget-friendly options, the OnePlus Nord CE 5 is available for Rs 22,999, down from Rs 28,999. The Redmi Note 15 5G has dropped to Rs 20,999 from Rs 26,999, and the Realme Narzo 90x 5G is now priced at Rs 12,749, down from Rs 16,999.
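As a quick sanity check on the listed deals, here is a small Python sketch that computes the percentage savings implied by the sale and original prices quoted above. The prices are from the article; the helper function and the selection of phones are illustrative.

def percent_off(original: int, sale: int) -> float:
    """Percentage saved relative to the original price."""
    return round(100 * (original - sale) / original, 1)

# (original price, sale price) in rupees, as quoted in the article
deals = {
    "Samsung Galaxy S25 Ultra": (1_29_999, 1_19_999),
    "iPhone 17 Pro": (1_34_900, 1_25_400),
    "OnePlus 15": (76_999, 68_999),
    "OnePlus 15R": (54_999, 44_999),
    "Redmi Note 15 5G": (26_999, 20_999),
}
for phone, (mrp, sale_price) in deals.items():
    print(f"{phone}: Rs {sale_price:,} ({percent_off(mrp, sale_price)}% off)")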
Apple Creator Studio Launched In India: Final Cut Pro On Mac And iPad Gets Smart Features; Check Price, Student Discount And Availability
Apple Creator Studio Subscription Price: Apple has launched its new Creator Studio bundle in India. The subscription brings Apple's creative apps together under a single plan, offering powerful tools for video, music, imaging, and productivity, and is designed for creators of all skill levels. The bundle includes apps such as Final Cut Pro, Logic Pro, and Pixelmator Pro, and is available across Mac, iPad, and iPhone. In addition, the subscription offers new intelligent features and premium content for Keynote, Pages, and Numbers, with Freeform set to be added later. The Cupertino-based tech giant said the service is built to provide studio-grade creative capabilities on Mac, iPad, and iPhone, while keeping user privacy at its core.

Apple Creator Studio: New Features

Apple said millions of creators already use its devices, and the new Apple Creator Studio builds on this ecosystem by making advanced creative tools easier and more flexible to use. As part of this update, Final Cut Pro on Mac and iPad is getting new smart features that make video editing faster. These include Transcript Search to find footage using text, Visual Search to quickly locate specific visuals, and Beat Detection to help edit videos in sync with music. Apple Creator Studio also improves productivity tools: subscribers get premium templates, themes, and curated content in Keynote, Pages, and Numbers, along with new smart features that help users create presentations, documents, and spreadsheets faster and more easily.

Apple Creator Studio: New AI Features

On iPad, Apple is introducing a new AI-powered Montage Maker that can automatically create a dynamic edit from raw footage, helping creators get started within seconds. Music creators are also getting major upgrades with Logic Pro on Mac and iPad, including new AI-driven tools like Synth Player and Chord ID that make composing, producing, and experimenting with music easier. Logic Pro also features a refreshed sound library along with advanced tools that support songwriting, remixing, and music production for video content. Meanwhile, Pixelmator Pro is arriving on iPad for the first time, offering a touch-optimised interface, full Apple Pencil support, and powerful editing tools that work seamlessly across iPad and Mac.

Apple Creator Studio: Price In India And Availability

The subscription costs Rs 399 per month, while the yearly plan is priced at Rs 3,999. College students and teachers can get it at a discounted rate of Rs 199 per month or Rs 1,999 per year. Apple is also offering a one-month free trial, and users who purchase a new Mac or an eligible iPad can get up to three months of free access. Apple's Family Sharing feature allows up to six family members to use the subscription at no extra cost. The service will be available on the App Store starting January 28, 2026.
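For readers weighing monthly against yearly billing, here is a minimal Python sketch comparing the effective annual cost of the plans quoted above. The prices come from the article; treating twelve monthly payments as the point of comparison is an illustrative assumption.

# Annual cost of paying monthly vs. the yearly plan, using the quoted prices.
plans = {
    "Standard": {"monthly": 399, "yearly": 3_999},
    "Student/Teacher": {"monthly": 199, "yearly": 1_999},
}
for name, p in plans.items():
    twelve_months = 12 * p["monthly"]
    saving = twelve_months - p["yearly"]
    print(f"{name}: Rs {twelve_months:,}/yr if paid monthly vs "
          f"Rs {p['yearly']:,} on the yearly plan (save Rs {saving:,})")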