Pricing Plans
Take advantage of one of our premium subscription packages with favorable terms for your personal growing startup.
- 3 projects
- 30-day version history
- Up to 2 editors
- Unlimited cloud storage
- Unlimited projects
- Unlimited version history
- Custom file/user permissions
- Invite-only private projects
- Org-wide design systems
- Centralized teams
- Private plugins
- Plugin administration
Ready to discuss your project?
Get in touch with us if you have interesting suggestions or need help and any consultation.
Apple iOS 18.2 public beta arrives with new AI features, but some remain waitlisted
Apple has rolled out the public beta of its latest mobile operating system, iOS 18.2, incorporating various AI-driven features previously restricted to developers. This update highlights Apple’s strategic embrace of artificial intelligence, introducing tools such as an AI emoji generator named Genmoji, an image generation app called Image Playground, and integration of ChatGPT with Siri. Additionally, users will be able to utilize visual search capabilities through the cameras of the new iPhone 16.
While the public can begin testing these innovative features, many of them necessitate joining a waitlist for access. These advancements collectively contribute to what Apple has branded as “Apple Intelligence,” cleverly abbreviated to “AI.” By delivering a range of functionalities powered by large language model technologies, Apple aims to enhance user interaction with Siri, assist with writing and proofreading, and enable creative image generation.
One of the most striking features is the advanced capabilities of Siri, now supplemented by ChatGPT. Users can request Siri to retrieve information from various apps or perform actions based on visual content displayed on their screens. This integration is not limited to built-in apps, as third-party developers will also have access to Apple Intelligence, enabling them to enrich their applications with these groundbreaking features. Initial access to these AI technologies will focus on categories such as Books, Browsers, Cameras, Document Readers, File management, Journals, Mail, Photos, Presentations, Spreadsheets, Whiteboards, and Word Processors.
Among the new features, Image Playground stands out as a dedicated app for generating images from user prompts. Genmoji serves a similar purpose for creating personalized emojis, while Image Wand can convert rough sketches noted by users into polished AI-generated visuals. Users are likely to be enthusiastic about these features, engaging with the capabilities of generating bespoke visuals and emojis.
The integration of ChatGPT into Siri amplifies its functionality, enabling the virtual assistant to assist in writing tasks, answering questions, and image creation. This partnership offers a reciprocal benefit where ChatGPT can leverage the vast audience of iPhone users for exposure, while Apple stands to gain a more intelligent and versatile Siri.
Furthermore, iPhone 16 users can utilize a new “Camera Control” button to access Visual Intelligence, providing the ability to search and identify objects and locations in the real world directly through their camera.
However, potential users should be aware of the requirement to join a waitlist for certain features. Access to Apple Intelligence must be activated manually, and users might have to sign up for other image generation capabilities, which may delay access for an unknown duration—potentially stretching from days to weeks. The reasons for this delay include not only AI safety concerns—illustrated by users creating inappropriate content with Genmoji—but also Apple’s intention to safely manage the rollout of features at scale.
Prior to the public beta of iOS 18.2, Apple already introduced several AI capabilities in iOS 18.1, including writing tools and notification summaries powered by Apple Intelligence. Additionally, Siri received a visual overhaul with glowing screen edges for a more engaging user experience.
Alongside iOS 18.2, Apple released its first public betas for iPadOS 18.2, macOS Sequoia 15.2, and tvOS 18.2, reflecting a cohesive approach to integrating AI across its ecosystem of devices. With these updates, Apple emphasizes its commitment to enhancing user experiences while cautiously managing the complexities associated with AI technology deployment.
As testing continues during the beta phase, users are expected to explore and provide feedback on these new functionalities, shaping the future of Apple’s AI initiatives and its potential integration in daily use. The evolution of Apple Intelligence signifies a pivotal moment for Apple’s software, aligning with broader trends in the tech industry, where artificial intelligence plays an increasingly crucial role in product design and user interaction.
What Trump’s victory could mean for AI regulation
The recent U.S. election cycle concluded with Donald Trump securing the presidency as the 47th president, along with the Republican Party gaining significant governmental control, including potentially the House of Representatives. This shift in power portends considerable changes in AI policymaking, as Trump has expressed his intentions to dismantle the existing frameworks put in place by the Biden administration, particularly focusing on the AI Executive Order (EO) he plans to overturn immediately upon taking office.
Biden’s AI Policy
The Biden administration’s approach to AI regulation was established through an executive order issued in October 2023. This decision arose in response to Congress’s failure to enact comprehensive regulations on AI, taking a route that emphasizes voluntary compliance. Biden’s AI EO encompasses various aspects of AI usage, including advancements in healthcare and measures to address intellectual property theft risks. However, two key provisions have drawn strong criticism, particularly from Republican lawmakers.
These provisions require AI developers to inform the government about their model training processes and results from security vulnerability tests. Additionally, the National Institute of Standards and Technology (NIST) is tasked with providing guidance to help companies identify and remedy flaws in their AI systems, including issues related to bias. Critics, especially aligned with Trump, argue these requirements are excessively burdensome and could deter innovation. They interpret the EO as an overreach, exploiting the Defense Production Act to impose stringent regulations on AI.
Trump’s allies have vocally opposed the administration’s policies on multiple fronts, suggesting they threaten to stifle innovation and entrench existing large tech companies. Notably, Trump pledged to cancel the AI EO and prohibit the use of AI for censoring American citizens, indicating a significant pivot from the current administration’s approach to AI governance.
Potential Changes Under Trump
The specifics of what a Trump-era AI policy might look like remain unclear. His previous executive orders generally focused on promoting AI research and development while prioritizing civil liberties, a contrast to the regulatory measures set forth by Biden. Trump has made broad promises about fostering AI development that emphasizes free speech but has not detailed specific policies.
Some Republicans advocate that NIST should concentrate on AI’s safety risks concerning national security without imposing new restrictions that could interfere with ongoing research and development. The fate of the AI Safety Institute, established under the Biden EO to study AI risks, is uncertain, despite a coalition of industry stakeholders calling for legislative measures to safeguard its existence.
Despite Trump’s acknowledgment of AI’s potential dangers and power demands, experts believe significant regulatory measures may not materialize. Political analyst Sarah Kreps suggests that even under a Trump presidency, attacks on the Biden EO may not escalate to the levels needed to abolish it entirely.
State-Level Initiatives and Legislative Changes
Under a Trump presidency, regulatory authority could become decentralized, leading to enhanced state-level actions. Several states have already initiated legislative measures addressing AI concerns. Tennessee implemented protections for voice artists against AI cloning, while Colorado adopted a risk-based regulatory framework concerning AI usage. California’s recent legislation mandates transparency in AI training details, reflecting a growing trend for states to fill regulatory gaps left by federal policies.
The potential for increased state regulation could lead to a patchwork of rules governing AI, complicating compliance for companies operating across state lines. State legislators have introduced nearly 700 pieces of AI-related legislation this year alone, indicating a robust interest in addressing technology’s evolving risks at the local level.
Impact of Protectionist Policies and Trade Considerations
Experts anticipate that Trump’s presidency may result in more stringent protectionist policies concerning AI technology, particularly regarding exports to adversarial nations like China. Tighter export controls could exacerbate existing tensions, ultimately stymying global collaborative efforts to address AI regulation. This trend could allow authoritarian regimes to leverage AI in ways that threaten global stability, counteracting the calls for standardized international norms.
Trade policies could significantly influence the AI industry, as proposed tariffs — such as Trump’s suggested 60% tariff on Chinese-manufactured products — could drastically affect funding for AI research and development. These economic pressures might constrain access to crucial technologies and resources necessary for innovation in the sector.
Conclusion
The transition to a Trump presidency portends a potential shift away from the regulatory framework established by the previous administration, emphasizing a lighter regulatory touch and potentially leading to a fragmented approach to AI governance, with states stepping up where the federal government retreats. Policies concerning trade and technology may have profound implications for the AI sector, with potential tariffs and protectionist measures reshaping the landscape.
While the political dialogue surrounding AI governance has become increasingly partisan, experts urge that the conversation must transcend party lines to comprehensively address AI’s inherent risks and opportunities. The need for collaborative efforts to create effective governance solutions remains critical, as the impact of AI grows in significance across varied sectors of society and the economy.
What Trump’s win might mean for Elon Musk
Elon Musk, the billionaire CEO known for leading Tesla, SpaceX, Neuralink, and xAI, recently pivoted his political stance to support President-elect Donald Trump in the wake of the election. This marked a significant turn from his previous criticisms of Trump during his first term—especially following Trump’s withdrawal from the Paris climate accords, which prompted Musk to resign from his advisory roles during that time.
Despite Trump’s historically anti-electric vehicle (EV) stance and skepticism towards climate change, Musk’s newfound support is reflected in his considerable financial contributions, donating over $100 million to a pro-Trump super PAC. The billionaire is now seen as one of Trump’s most influential advisers, especially after Trump publicly acknowledged Musk during his victory speech, calling him a “super genius” and saying, “A star is born, Elon.” In September, Trump even promised Musk a position leading a new Department of Government Efficiency, humorously abbreviated by Musk as DOGE, referencing a meme cryptocurrency he has previously promoted.
This role could potentially allow Musk to initiate sweeping reforms aimed at cutting government spending, as he advocates for a reduction in what he perceives as the overwhelming size and inefficiency of the federal bureaucracy. His approach appears to leverage the techniques that have brought success to Tesla, emphasizing innovation through simplification, such as questioning every requirement and eliminating redundant processes.
In commentary shared while traveling post-election, Musk articulated plans for modernization within government agencies, suggesting a humane transition for government employees to the private sector, paid job searches for displaced workers, and the implementation of term limits. He expressed a desire for necessary regulations without excessive bureaucracy, akin to maintaining a balance between referees and players in a sports game.
The implications of Trump’s win for Musk’s business ventures are significant. For Tesla, where Musk has long navigated government support, the anticipated rollbacks of Biden-era EV policies could create a more favorable business environment. If Trump indeed eliminates EV subsidies, Tesla could maintain its competitive edge, as it is well-established compared to emerging rivals. Analysts have reported a positive market reaction, with Tesla’s stock seeing notable gains since Trump’s victory.
SpaceX appears to be poised for advancements during the Trump administration, with Trump previously supporting the organization’s missions to Mars. Under Trump’s first term, notable strides were made in American space policy, including the inception of the U.S. Space Force. Trump reaffirmed his commitment to space exploration during campaign appearances, hinting at accelerated timelines for lunar and Martian missions that could align with SpaceX’s launch schedules. Regulatory reform within the Federal Aviation Administration is also anticipated, as Musk has criticized the agency’s perceived delays hampering commercial innovation.
However, the outlook for Musk’s social media company, X, and his AI initiative, xAI, is less certain. X has seen a significant decrease in advertising following Musk’s takeover, with major brands pulling out due to concerns over content impacting their image. Despite this, Musk is hopeful that Trump’s electoral success could restore advertisers’ confidence, leading to a reinstatement of financial support for the platform.
Throughout his entrepreneurial endeavors, Musk has frequently clashed with federal regulatory bodies, including the Federal Trade Commission and the Securities and Exchange Commission, maintaining a combative stance towards oversight that he views as burdensome. Following Trump’s victory, Musks’ xAI initiative might benefit from a regulatory landscape that prioritizes minimal interference, allowing for a more rapid development in artificial intelligence.
Economic implications could arise from Trump’s trade policies, particularly with suggested tariffs that may affect the broader AI and tech sectors. Ultimately, Musk’s journey through political alliances and business strategies exhibits a complex interplay between political power, corporate influence, and regulatory dynamics in shaping future landscapes for innovation and industry in America.
The other election night winner: Perplexity
On a pivotal election night, two AI startups, xAI and Perplexity, sought to showcase the capabilities of their AI chatbots in real-time election reporting amidst one of the most significant political events in the United States. Led by Elon Musk, xAI’s Grok faced immediate criticism for providing incorrect information about race outcomes even before the polls closed. In stark contrast, Perplexity emerged as a reliable source, providing real-time insights, detailed maps, and links to reputable resources throughout the night.
Perplexity’s proactive approach paid off after it introduced a dedicated election information hub just before the elections. This hub featured real-time election maps populated with data from trusted sources like Democracy Works and the Associated Press, mirroring Google’s election mapping strategy, which was a notable departure from many AI chatbots that opted to refrain from answering election-related queries due to concerns over misinformation.
The reluctance of most AI companies to engage with the election reflected a cautious strategy born from past experiences with erroneous “hallucinations” in AI outputs. For instance, OpenAI’s ChatGPT Search, which recently launched, was unable to provide dependable answers regarding the election, with the company directing users to Vote.org instead. This underlined an awareness that its AI model, still in an experimental phase, was not yet ready for the scrutiny of a live election scenario.
On the other hand, Perplexity had been testing its Google competitor since December 2022 and believed it had gathered sufficient data to handle the demands of election reporting effectively. Its performance on election night showcased how the startup actively competed with traditional media outlets for audience attention during a moment of high stakes.
Although Perplexity had established agreements with organizations such as Democracy Works and the Associated Press to power its election-related features, it also sourced live coverage from other media outlets like CBS, CNN, and BBC without any clear revenue-sharing arrangements. This sparked concerns about whether Perplexity was in direct competition with these media companies, especially since it garnered significant traffic on election night.
Regarding the technical aspects, Perplexity’s election hub featured visually appealing charts and maps, elements that users found accessible and easy to navigate. The platform displayed a continuously updated electoral map that indicated the status of key races, albeit with occasional bugs, which the company addressed promptly in response to user reports. This real-time updating was essential for maintaining user engagement and trust.
While the charts were rooted in traditional electoral reporting techniques, Perplexity also integrated AI-driven features to supplement its offerings. Users could ask real-time questions about the presidential race, and though the AI’s answers weren’t as polished or insightful as commentators on mainstream media platforms, they generally provided accurate and relevant information with few inaccuracies. Responses rarely missed the mark, showcasing Perplexity’s commitment to maintaining timeliness and correctness.
For example, when users inquired about updates in “Blue Wall” states or the ballot counts in swing states, Perplexity occasionally provided incorrect references, yet overall, it succeeded in giving timely replies, something many competitors struggled with during such a critical event.
As the 2024 election season marked a unique intersection of technology and democracy, it became evident that AI chatbots would play an increasingly significant role in providing election information. While Perplexity’s successful navigation of election night has set it ahead of competitors in the race for reliable election reporting, the ongoing challenge for AI companies will be ensuring accuracy against the backdrop of complex, fast-evolving democratic processes.
Ultimately, Perplexity’s efforts on election night exemplified the potential of AI to offer accurate, real-time updates, suggesting a bright future within the realm of AI-assisted information dissemination during elections. The race is on for AI startups to refine their technologies, learn from experiences such as this, and provide trustworthy services that align with the critical interests of the public and democratic processes.
OpenAI acquired Chat.com | TechCrunch
OpenAI has recently acquired the domain name Chat.com, enhancing its portfolio of valuable online assets. As of now, visitors to Chat.com are redirected to OpenAI’s popular AI chatbot, ChatGPT. An official spokesperson for OpenAI confirmed this acquisition in an email.
Chat.com is a notable domain within the internet landscape, having been registered back in September 1996, which makes it one of the older domain names still in existence. In a notable transaction last year, the domain was purchased by Dharmesh Shah, co-founder and CTO of Hubspot, for an impressive sum of $15.5 million. This acquisition marked it as one of the highest publicly reported sales of a domain name in history.
It’s important to note that since its purchase by Shah, the domain has not changed hands until OpenAI’s recent acquisition, which suggests that ChatGPT is not being hosted directly on Chat.com. Therefore, this move does not signify a shift in branding for OpenAI’s chatbot but rather an enhancement of its online presence through a prestigious domain name.
This Week in AI: It’s shockingly easy to make a Kamala Harris deepfake
Summary of AI and Deepfake Media Concerns
The recent advancements in generative AI technologies have paved the way for potential misuse, especially in the realm of deepfakes. A striking example of this is the creation of a convincing audio deepfake of Kamala Harris for just $5 in under two minutes. This alarming simplicity underscores how accessible these tools are for generating disinformation.
The Procedure of Deepfake Creation
The author experimented with Cartesia’s Voice Changer, a tool designed to transform voices by creating digital clones based on 10-second voice recordings. The process involved cloning Harris’ voice using snippets from her campaign speeches and using this clone to produce audio that closely mimicked her speech. Although Cartesia claims to have measures in place to prevent harmful use, it operates mainly on an honor system without verified safeguards, raising concerns about the potential for misuse.
The Impact of Deepfakes on Disinformation
The ease of creating such deepfakes presents real challenges, particularly given that disinformation is proliferating alarmingly. Instances of generative AI-driven disinformation have risen, including bot networks targeting U.S. elections and deepfake audio messages of prominent figures urging citizens to abstain from voting. This content is not merely an issue for tech-savvy audiences; a significant amount of AI disinformation targets various demographics, often going unnoticed due to the sheer volume.
Data from the World Economic Forum revealed that the volume of AI-generated deepfakes surged 900% from 2019 to 2020. However, legal frameworks to counteract such challenges are limited, leaving detection tools a constant work in progress, which leads to an ongoing “arms race” between the development of deepfakes and the methods to identify them.
Solutions and Expert Opinions
At TechCrunch’s Disrupt conference, discussions emerged about potential solutions to tackle the ramifications of deepfakes, including the implementation of invisible watermarks in AI-generated content for easier identification. Some experts also pointed to regulatory measures, such as the U.K.’s Online Safety Act, which might mitigate the flow of disinformation. Yet, there is skepticism about the effectiveness of these strategies as the technology evolves rapidly and outpaces regulatory efforts.
Imran Ahmed, CEO of the Center for Countering Digital Hate, expressed a grim view, suggesting we are entering a phase of perpetual disinformation on the digital landscape. Therefore, the solution may not lie solely in technological fixes but rather in fostering skepticism among the public, especially regarding viral content.
Major Developments in AI Technology
In recent news, OpenAI’s ChatGPT has introduced a new search integration feature, while Amazon has resumed drone deliveries in Phoenix after halting the Prime Air program in California. Notably, former Meta AR lead is joining OpenAI, highlighting the tech industry’s continued focus on integrating advanced technologies.
OpenAI’s Sam Altman acknowledged limitations due to lack of computational resources in product development, while Amazon launched a generative AI feature, X-Ray Recaps, which summarizes TV content using AI capabilities. On the contrary, Anthropic’s recent AI model, Claude 3.5 Haiku, has increased in price and lacks multi-faceted data analysis capabilities.
In a surprising move, Apple acquired Pixelmator, emphasizing its commitment to incorporating more AI functionalities in its imaging applications. Furthermore, Amazon’s CEO hinted at a significant upgrade for the Alexa assistant, potentially enabling it to take independent actions, although delays have been noted in this project.
Research Insights on AI Vulnerabilities
A paper from researchers at Georgia Tech, Stanford, and Hong Kong University reveals alarming vulnerabilities in AI systems that can be manipulated by “adversarial pop-ups.” These deceptive notifications can cause AI agents to engage in malicious activities, with a staggering 86% failure rate in ignoring such pop-ups. The current security measures for AI models remain inadequate, illustrating a pressing need to develop more robust systems to safeguard against vulnerabilities in AI workflows.
New AI Models and Defense Applications
In a strategic move, Meta announced it’s partnering with Scale AI to tailor its Llama AI models for military applications. The newly formed Defense Llama will provide functionalities customized for defense operations, enabling military personnel to pose relevant queries regarding military tactics and defense systems without the restrictions that civilian chatbots face.
Nonetheless, there remains skepticism within the military regarding the reliability and return on investment regarding generative AI deployments, particularly due to security vulnerabilities associated with commercial models.
Innovations in AI Datasets
On a different note, Spawning AI has introduced a public domain image dataset aimed at allowing creators to opt-out of generative AI training. This initiative is crucial in the ongoing discussions surrounding copyrights and ethical use of proprietary data in AI models. Spawning AI claims that their dataset contains fully public domain content, fundamentally differing from typical web-scraped datasets vulnerable to copyright issues.
Conclusion
The rapid evolution of generative AI technologies raises questions about ethical usage, accountability, and the continued vigilance necessary in online spaces to combat disinformation. As AI tools become more prevalent, it underscores the necessity for both technological advancements in detection and increased public awareness to navigate this complex landscape effectively. The commitment to transparency, ethics in AI use, and readiness to adapt to new challenges will be vital as society grapples with these emerging realities.