Boztek

Adoption, migration, optimisation, security and management services designed to deliver business agility.

Improve your security posture with tailored strategies and front-line defence services.

Scalable colocation and connectivity within a hyper secure environment.

Pricing Plans

Take advantage of one of our premium subscription packages with favorable terms for your growing startup.

Starter
$0
for up to 2 editors and 3 projects
  • 3 projects
  • 30-day version history
  • Up to 2 editors
  • Unlimited cloud storage
Popular
Professional
$12 /mo
  • Unlimited projects
  • Unlimited version history
  • Custom file/user permissions
  • Invite-only private projects
Organization
$24 /mo
  • Org-wide design systems
  • Centralized teams
  • Private plugins
  • Plugin administration

Ready to discuss your project?

Get in touch with us if you have suggestions, need help, or would like a consultation.

    Amazon attempts to lure AI researchers with $110M in grants and credits

    The ongoing competition among major cloud vendors in the realm of artificial intelligence (AI) is intensifying, with notable developments from Google, Microsoft, and Amazon Web Services (AWS). Google’s custom chip, Trillium, has recently entered preview for training and running AI models, while Microsoft’s Maia chip is expected to debut shortly. However, AWS is making headlines with its own suite of AI chips—Trainium, Inferentia, and Graviton.

    To promote its Trainium chip specifically, AWS is launching a new initiative called Build on Trainium, which aims to support AI research with substantial funding. This program will distribute a total of $110 million to educational institutions, scholars, and students engaged in AI research. As part of this initiative, AWS plans to award up to $11 million in Trainium credits to select universities and offer individual grants of up to $500,000 to other AI researchers.

    AWS is also setting up a “research cluster” that consists of up to 40,000 Trainium chips, which research teams and students can access through self-managed reservations. Gadi Hutt, senior director at AWS’ Annapurna Labs, emphasized that Build on Trainium seeks to provide researchers with the necessary hardware support to advance their work. The program aims to address a significant resource bottleneck in AI academic research, which has been hampered by limited access to computational infrastructure compared to large tech companies. For instance, Meta has acquired over 100,000 AI chips for its models, while Stanford’s Natural Language Processing Group operates with only 68 GPUs.

    Despite the ambitious goals of the Build on Trainium program, skepticism persists regarding AWS’s intentions. Some critics, like Os Keyes, a PhD candidate at the University of Washington, view the initiative as a potential means of influencing academic research funding. Keyes points out that AWS will have the ultimate say in the allocation of grants, raising concerns about the potential commercialization of academic research. Hutt mentioned that the evaluation process would consider research merit and needs; however, details about the selection process remain somewhat opaque. An AWS spokesperson later clarified that a committee of AI experts would review proposals to identify impactful projects.

    Research indicates that corporate-funded AI studies often prioritize commercially viable work over critical analyses of ethical implications. A recent paper highlighted that leading AI firms generate less output addressing AI ethics compared to more traditional research avenues. This trend further raises concerns about the narrowing scope of “responsible” AI research funded by large corporations, as it often lacks diversity in topics.

    Questions remain about whether participants in the Build on Trainium program will become entrenched within the AWS ecosystem. Hutt assured that grant recipients would not be locked into AWS technologies and would only be required to publish their findings and open-source their work on GitHub under a permissive license. This aspect of the initiative aims to uphold transparency and accessibility in research outputs.

    However, the Build on Trainium program may have limited influence in bridging the divide between AI academia and industry. In 2021, government agencies in the U.S., excluding the Department of Defense, allocated only $1.5 billion for academic AI research funding, while AI industry investments worldwide exceeded $340 billion during the same period. The overwhelming majority of individuals earning PhDs in AI tend to gravitate toward private sector roles, often incentivized not only by higher salaries but also by access to critical computational resources and data.

    Furthermore, companies have been increasingly aggressive in recruiting AI faculty and offering substantial grants to PhD students for their research. As a result, industry now accounts for over 90% of the largest AI models produced each year, while the volume of AI research papers with industry co-authors has nearly doubled since 2000.

    In response to the widening funding gap between academia and industry, policymakers have started to pursue remedies. The National Science Foundation, for instance, announced a $140 million investment in 2022 to establish several university-led National AI Research Institutes, aimed at exploring how AI can address challenges such as climate change and educational improvement. Additionally, efforts are underway to create a U.S. National AI Research Resource, a $2.6 billion initiative designed to enhance access to computational resources and datasets for AI researchers and students.

    Despite these efforts, they are relatively minor compared to the extensive corporate programs shaping the current AI landscape. Given the scale and financial clout of large tech companies, the prevailing situation in the AI research funding ecosystem shows little sign of change in the near future. Thus, while AWS’s Build on Trainium program presents an opportunity for advancing AI research, its efficacy in revolutionizing the relationship between academia and industry remains uncertain.

    YouTube is now letting creators remix songs through AI prompting

    In a significant move to enhance musical creativity on its platform, YouTube has introduced a new feature that enables select creators in the U.S. to remix tracks using AI technology. This capability builds on last year’s launch, which allowed creators to generate AI-derived songs using the vocal stylings of popular artists such as Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, and Troye Sivan. The new feature lets creators describe the modifications they want to a song, producing a 30-second snippet that can be used within YouTube Shorts, the platform’s short-form video format.

    Creators who are part of the testing cohort can choose eligible tracks from a selection offered through collaborations with YouTube’s label partners. By selecting the “Restyle a track” option, they can specify how they would like the chosen song to be altered, resulting in a unique AI-generated remix. Importantly, these remixed pieces will be appropriately credited to the original work and labeled as modifications created by AI, ensuring transparency regarding their origins.

    This latest feature represents a continuation of YouTube’s broader initiative to integrate artificial intelligence into creative endeavors and deepen the relationship between artists, creators, and their audiences. YouTube had earlier launched the Dream Track feature in November 2023, which utilizes the Lyria music generation model developed by DeepMind. Additionally, they had introduced a tool that enables users to create music merely by humming a tune—a reflection of the platform’s commitment to engaging its community in innovative ways.

    YouTube emphasizes that these experiments are designed to explore the possibilities of AI in the musical realm, aiming to empower artists and creators to push their creative boundaries. The company asserts that by offering interactive tools and new experiences, fans can connect with their favorite artists in deeper, more imaginative ways.

    However, navigating the complexities of the music industry is a significant concern, and YouTube has proactively sought to address potential backlash from artists and rightsholders. Prior to launching these AI features, the company announced its intention to compensate artists for the use of their work in AI-driven projects. This commitment materialized through a partnership with Universal Music Group (UMG) to establish a compensation framework for rightsholders, indicating YouTube’s willingness to engage with the music industry stakeholders on fair usage.

    In the landscape of AI-driven music creation and remixing, YouTube is not alone. Other companies are also exploring similar avenues, including an initiative led by former JioSaavn executive Gaurav Sharma, who is developing an app named Hook. This app intends to provide legal avenues for users to remix songs for short video creation, showcasing a growing trend of enabling creative expressions through technology.

    Overall, the integration of AI into music creation and remixes on YouTube reflects a shift towards enhanced accessibility and innovation for creators. This not only enriches the user experience but also fosters an evolving relationship between artists, technology, and audiences, all while striving to respect the rights and contributions of music artists.

    Perplexity brings ads to its platform

    Perplexity, an AI-powered search engine, is set to initiate its advertising program starting this week, marking a significant shift in its strategy to generate revenue. Initially, ads will be available in the U.S. and will be presented as “sponsored follow-up questions.” For instance, questions like “How can I use LinkedIn to enhance my job search?” will be formatted to appear alongside search results. The paid advertisements will be clearly labeled as sponsored and positioned beside the AI-generated answers, ensuring that the user can distinguish them easily.

    Perplexity’s move into advertising is a calculated effort to create a sustainable revenue model, as highlighted in a blog post where the company stated that relying solely on subscriptions has proven insufficient to establish a viable revenue-sharing program with content publishers. As such, incorporating advertising is perceived as essential for generating a steady and scalable revenue stream to support both the platform and its publisher partners. Among the brands and agencies jumping on board with this initiative are notable names like Indeed, Whole Foods, Universal McCann, and PMG.

    The implementation of ads does raise questions about the integrity of the information presented. Perplexity maintains that the answers to these sponsored questions will continue to be generated by its AI algorithms without influence from advertisers. They also stress that advertisers will not have access to any personal information from users, asserting that the advertising format is designed to protect the accuracy, utility, and objectivity of the answers users receive.

    This strategic decision contrasts with OpenAI’s approach, which chose not to implement advertisements in its ChatGPT Search tool. On the other hand, Google has already begun testing ads in its AI search experience, indicating a broader industry trend toward monetization through advertising in AI platforms. However, incorporating ads into AI-generated content has proven challenging for other companies as well; Microsoft, for example, briefly experimented with advertising in its Bing chatbot responses but eventually discontinued the effort due to various difficulties.

    Perplexity aims to position its ad offerings as a premium alternative to those available on Google, targeting educated and affluent consumers. Despite this aspiration, some analysts express concerns regarding the size, reach, and targeting efficiency of Perplexity’s ad capabilities. Many in the advertising community are also wary of the potential for plagiarism, particularly given the legal challenges Perplexity faces over its content practices. Notably, News Corp’s Dow Jones and the NY Post have filed lawsuits against Perplexity, accusing it of “content kleptocracy,” and several news organizations have said their content appears to be closely replicated by the platform. Just last month, The New York Times sent a cease-and-desist notice to Perplexity, further complicating its reputational challenges.

    To ensure compliance and support publisher interests, Perplexity has made adjustments to how it cites sources and continues to develop its revenue-sharing program. The company also contends, however, that many publishers would simply prefer it did not exist, seeing it as a challenge to their traditional claim of ownership over reported facts.

    The pressure to enhance monetization strategies is palpable for Perplexity, especially as it heads into a crucial funding phase. Reports indicate that the company is in the final stages of securing $500 million at a valuation of approximately $9 billion. Nevertheless, its revenue-generating methods remain limited, primarily relying on a premium subscription service known as Perplexity Pro, which provides additional features for a monthly fee of $20.

    In summary, Perplexity’s foray into advertising represents a pivotal shift in its business model, blending AI technology with advertising mechanisms to create a sustainable revenue stream. While the initiative aims to balance user utility with commercial interests, it also faces considerable scrutiny and legal challenges that could impact its trajectory. As the company explores these new monetization avenues, its continued success will depend on addressing user concerns, maintaining content integrity, and navigating the complex landscape of digital advertising and content rights.

    General Catalyst and Khosla Ventures back data mapping startup Lume

    Data Integration Challenges and the Solution by Lume: A Deep Dive

    Data integration is a critical component in various workflows, from customer onboarding to payroll processing. However, the actual data integration process can often be prolonged and labor-intensive. This inefficiency arises due to data being routed into isolated databases and SaaS applications, each of which stores information in different formats. Such fragmentation complicates the task of transferring data from one system to another seamlessly.

    Lume’s Innovative Approach

    Lume is addressing these data integration issues through the use of advanced AI technologies. The company’s platform automates the data mapping process, which involves extracting data from disparate silos and “normalizing” it into a standardized format to facilitate smoother integration into other systems. Notably, Lume provides notifications whenever a data integration process fails—an all-too-common occurrence—and utilizes its AI-driven algorithms to propose a remediation strategy. Additionally, Lume offers an API and a web interface, allowing customers to embed Lume’s capabilities directly into their existing workflows.

    What distinguishes Lume from other data mapping solutions is its focus on complex, nested data formats such as JSON, rather than flatter sources like spreadsheets or PDF documents. Lume enables companies to manage intricate tasks such as arithmetic processing, taxonomy organization, and text manipulation. According to Nicolas Machado, co-founder and CEO of Lume, this focus allows companies to save both time and money compared to outsourcing similar data management projects.
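
    As a rough illustration of what such normalization involves, the short Python sketch below flattens a nested JSON record into a standardized target schema. The field names, paths, and the `map_record` helper are hypothetical, invented for this example; they do not reflect Lume’s actual API or algorithms.

    ```python
    # Hypothetical sketch of schema mapping: normalize a nested JSON record
    # from one system into a flat, standardized target schema.
    # Field names and mapping rules are illustrative only, not Lume's API.
    from typing import Any

    # Target fields mapped to paths inside the nested source record.
    FIELD_MAP: dict[str, list[str]] = {
        "employee_id": ["worker", "id"],
        "full_name": ["worker", "profile", "name"],
        "salary_usd": ["compensation", "base", "amount"],
    }

    def get_path(record: dict[str, Any], path: list[str]) -> Any:
        """Walk a nested dict along `path`; return None if any key is missing."""
        current: Any = record
        for key in path:
            if not isinstance(current, dict) or key not in current:
                return None
            current = current[key]
        return current

    def map_record(source: dict[str, Any]) -> dict[str, Any]:
        """Normalize one nested source record into the flat target schema."""
        return {target: get_path(source, path) for target, path in FIELD_MAP.items()}

    payroll_export = {
        "worker": {"id": "E-1042", "profile": {"name": "Ada Lovelace"}},
        "compensation": {"base": {"amount": 98000, "currency": "USD"}},
    }
    print(map_record(payroll_export))
    # {'employee_id': 'E-1042', 'full_name': 'Ada Lovelace', 'salary_usd': 98000}
    ```

    In practice, the hard part that Lume’s AI targets is inferring such field correspondences automatically across systems whose schemas disagree, rather than hand-writing a mapping like the one above.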

    Addressing a Long-standing Challenge

    Machado emphasized the long-standing difficulty of automating the seamless transfer of data between systems. He highlighted that this manual process has persisted for over sixty years, primarily because data is defined and structured uniquely across different systems. “Moving data seamlessly is a completely manual process,” Machado stated. He questioned why this glaring inefficiency had not been automated earlier, pointing out the heterogeneity in data definitions among vendors, companies, and integrations.

    The insight for Lume stemmed from the founders’ personal experiences with these challenges during their professional careers. Nicolas Machado, Robert Ross, and Nebyou Zewde first met as undergraduates at Stanford University, all majoring in computer science with an emphasis on artificial intelligence. Their backgrounds include significant contributions to data integration projects at notable tech companies such as Apple and Opendoor. Sensing an opportunity to leverage AI advancements to solve the data integration conundrum, the trio reunited in early 2022 to begin work on what would become Lume.

    Machado reminisced about their efforts, recounting how they initially collaborated at Robert’s apartment during evenings, tackling the complexities surrounding data integration.

    Founding and Growth of Lume

    Lume was established in January 2023 and quickly launched its first product in March of the same year. The company also participated in the W23 batch of Y Combinator, a prestigious startup accelerator. Following their launch, Lume experienced robust inbound interest, rapidly accumulating a diverse customer base that spans both startups and Fortune 500 companies.

    In a significant financial milestone, Lume secured a $4.2 million seed funding round, led by General Catalyst, with participation from Khosla Ventures, Floodgate, Y Combinator, and various angel investors. The enthusiasm around Lume’s approach stems from investors who have personally encountered the challenges that Lume seeks to address. Machado noted how some investors reflect on their careers and express disbelief that such data integration issues persist decades later.

    The funding will primarily be allocated towards hiring additional staff, with plans to double their workforce from five to ten employees by the beginning of next year. The investment will also support ongoing technological advancements for Lume’s platform.

    Competitive Landscape

    Lume is not alone in confronting the pervasive data integration problem; other companies like SnapLogic and Osmos have also entered the space. SnapLogic, for instance, has garnered $371 million in venture funding to address similar challenges. With the demand for effective data integration solutions growing, competition is poised to intensify. However, Machado expresses confidence in Lume’s unique algorithmic approach and the functionality of its API, which seamlessly integrates into existing company workflows, thus setting it apart from competitors.

    Vision for the Future

    Looking ahead, Lume aspires to function as the "glue" that effortlessly connects any two data systems. Machado articulates a vision where Lume facilitates a seamless flow of data across systems, enabling organizations to harness the full potential of their data assets. He draws a parallel between the significance of data and oil, suggesting that just as oil must be processed to unlock its value and fuel machinery, data too must be effectively managed to leverage its full potential.

    In conclusion, Lume represents a forward-thinking solution to the long-standing issues surrounding data integration. By leveraging AI technology to automate data mapping and focusing on complex data formats, Lume offers businesses a modern tool to streamline their data workflows. As the company continues to grow and develop its capabilities, it aims to solidify its position in the competitive landscape of data integration solutions, helping organizations to realize the inherent value of their data.

    ScaleOps aims to take the frustration out of cloud management

    The appetite for cloud services has seen exponential growth, with expenditures more than doubling from 2019 to 2023 and projected to exceed $2 trillion by 2030, according to Goldman Sachs Research. However, as organizations leverage cloud technology, issues with spend management can pose risks to their return on investment (ROI). Yodar Shafrir recognized this challenge during his time at Run:ai, a startup that specializes in workload management, which Nvidia is in the process of acquiring.

    Shafrir observed firsthand the frustrations faced by DevOps teams due to inefficiencies in resource management. Many applications suffer from high costs associated with unused resources, while others crash because of inadequate resource allocation. This constant demand for engineering teams to fine-tune application resources detracts from their core development efforts.

    In an attempt to address these issues, Shafrir met Guy Baron, the former head of R&D at Wix, while Baron was a Run:ai customer. Their mutual understanding of the problem led them to co-found a new startup called ScaleOps, which focuses on optimizing cloud resource usage. ScaleOps operates in the crowded niche of cloud spend management tools, also referred to as FinOps. This market includes established players such as CloudHealth (owned by Broadcom), IBM’s Kubecost, Cloudability, and emerging startups like Exostellar, Ternary, CloudZero, and ProsperOps.

    Similar to its competitors, ScaleOps offers automated cloud management solutions tailored to the performance requirements of individual applications. The platform analyzes applications’ resource needs while considering cost and availability, aiming to minimize the overall footprint of cloud services utilized by specific apps. ScaleOps distinguishes itself by being self-hosted, capable of operating across various platforms, including any cloud, on-premises, or air-gapped environments.
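
    To make the idea concrete, here is a minimal Python sketch of the kind of right-sizing such tools automate: deriving container resource requests from observed usage instead of static guesses. The percentile rank and headroom factor are arbitrary assumptions for illustration, not ScaleOps’ actual policy.

    ```python
    # Hypothetical sketch of usage-based right-sizing for container resources.
    # The 95th-percentile target and 20% headroom are illustrative assumptions.
    def recommend_request(usage_samples: list[float], headroom: float = 1.2) -> float:
        """Recommend a resource request from observed usage samples."""
        samples = sorted(usage_samples)
        # Use the sample at the 95th-percentile rank as the baseline.
        p95_index = min(len(samples) - 1, int(0.95 * len(samples)))
        return samples[p95_index] * headroom

    cpu_millicores = [120, 140, 135, 160, 150, 400, 145, 155]  # observed CPU usage
    memory_mib = [300, 310, 305, 295, 320, 315, 330, 310]      # observed memory usage
    print(f"suggested CPU request: {recommend_request(cpu_millicores):.0f}m")
    print(f"suggested memory request: {recommend_request(memory_mib):.0f}Mi")
    ```

    A platform like ScaleOps would presumably run this kind of analysis continuously per workload and apply the results automatically, rather than as a one-off calculation.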

    Shafrir, who serves as CEO, emphasized that ScaleOps automates resource optimization to lower waste, enhance performance, and streamline collaboration between DevOps, FinOps, and application teams. This focus on operational efficiency is particularly appealing to companies looking to fine-tune their operations—especially during economic downturns.

    The market position of ScaleOps appears strong, with the company experiencing significant growth in its customer base, which already includes notable brands such as SentinelOne, Cato Networks, and Wiz. Shafrir anticipates adding over 100 brands to the roster by the end of the year. This momentum in customer acquisition has also translated to financial backing. ScaleOps recently secured $58 million in a Series B funding round, bringing its total raised capital to $80 million.

    While Shafrir did not disclose specifics about ScaleOps’ revenue or burn rate, he highlighted that the company adheres to a robust financial strategy aimed at ensuring both sustainability and growth. The mainstreaming of FinOps has played a role in ScaleOps’ success: recent surveys indicate that over 80% of companies have established formal FinOps teams, with another 16% considering creating them. Furthermore, 71% of respondents reported that their investments in FinOps increased over the past year, underscoring the growing importance of operational efficiency amid a broader slowdown in the tech sector.

    The Series B investment from Lightspeed Venture Partners, with participation from NFX, Glilot Capital Partners, and Picture Capital, will be allocated toward scaling ScaleOps’ team from its current headcount of 60 to over 200 by 2026.

    As cloud services continue to grow in popularity, the challenges of spend management will remain a focal point for many organizations. Companies like ScaleOps are poised to address these difficulties through innovative solutions designed to optimize resource utilization, ultimately helping clients improve their cloud ROI and navigate changing economic landscapes.

    Here’s how to create a custom emoji with the Apple Intelligence feature ‘Genmoji’

    Apple’s recent updates, namely iOS 18.1 and its successors (iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2), have introduced a variety of features aimed at enhancing user experience through innovative technology, particularly in the realm of Apple Intelligence. Among these newly introduced functionalities is a feature called Genmoji, which allows users to generate personalized emojis based on written prompts, enhancing how they communicate through digital imagery.

    ### Introduction to Genmoji

    Genmoji was unveiled during WWDC 2024 and represents an evolution in how users can interact with emoji. Integrated into the iPhone’s emoji keyboard, Genmoji offers the ability to create emojis that can range from the whimsical, such as “a sloth wearing a suit and tie,” to more individualized representations based on users’ photographs. This functionality not only enriches communications in Apple’s Messages app but also spans its Stickers and Tapbacks features, allowing users to react to messages with creativity.

    ### How to Use Genmoji

    To start creating Genmojis, users first need to ensure the Apple Intelligence feature is activated on their devices. The process begins by opening the Messages application, where users can initiate a new conversation or continue an existing chat. By tapping on the emoji keyboard at the bottom left, users are prompted to type a specific description into a search bar. Upon inputting the desired prompt, users can select the option labeled “Create New Emoji.”

    Within moments, Apple Intelligence processes the request and generates several versions of the emoji inspired by the provided description. Users can then choose their favorite design, which can be saved to their keyboard for future use by tapping “Add” in the upper right corner. Additionally, if a Genmoji is shared by someone else, it can be preserved by long-pressing the emoji and selecting “Emoji Details” to view the original prompt, followed by an option to download it.

    ### Availability and Launch

    Currently, Genmoji can be accessed by users in the iOS 18.2 public beta program, while the official public rollout is projected to occur in early December. Users should be aware that access to the beta may involve a wait, suggesting a phased rollout where users could experience varying timelines before being able to test the feature.

    ### Supported Devices

    The following Apple devices are compatible with Genmoji:

    – iPhone 15 Pro
    – iPhone 15 Pro Max
    – iPhone 16, 16 Plus, 16 Pro, and 16 Pro Max
    – iPad mini with A17 Pro chip
    – iPad Air and iPad Pro models with M-series Apple silicon (M1 or later)

    Support for macOS Sequoia is also anticipated to follow soon; however, specific timing has not been confirmed.

    ### Regional and Language Support

    While Genmoji brings exciting capabilities, it’s worth noting that some regions, such as China and the EU, will not have access due to regulatory constraints. Currently, Genmoji primarily supports U.S. English, with localized English support planned for late 2024 in Australia, Canada, New Zealand, South Africa, and the U.K. Furthermore, by 2025, additional languages, including Chinese, English dialects for India and Singapore, and major European and Asian languages, are expected to be available, broadening the feature’s accessibility and usability for global audiences.

    ### Conclusion

    In summary, Apple’s Genmoji feature represents an innovative leap in personalized communication, allowing users to create custom emojis that reflect their unique expressions and sentiments. With ongoing advancements in Apple Intelligence and future expansions in device compatibility and language support, Genmoji is poised to enhance digital interaction, making messaging more relatable and enjoyable. As the official launch approaches, both current beta testers and potential future users can look forward to a transformative addition to their Apple ecosystem.
