What caught our eye at this year’s WWDC?
Last month, Apple hosted their annual Worldwide Developers Conference (WWDC), showcasing the latest software innovations they’ve been working on. Our iOS team attended several sessions and shared their excitement about the announcements that stood out.
First up, let’s talk about Apple Intelligence. This was the headline-grabber of the conference. From what we’ve seen, Apple Intelligence is a significant leap in integrating AI into our daily lives. However, it won’t be available in the EU in 2024 and will initially only roll out on phones set to US English. We’re eager to get our hands on it once it’s released and will do a full write-up of our favourite features.
In the meantime, let’s dive into some other notable announcements from WWDC that our iOS team found intriguing.
iOS 18
WWDC is always our first glimpse of the new iOS version set to release later in the year. iOS 18 duly took the spotlight, hardly a surprise given it will roll out to the iPhone's enormous user base. So, aside from Apple Intelligence, what else did Apple unveil?
After years of complaints about the lack of customisation in iOS compared to Android, Apple is finally introducing more ways to personalise your iPhone. One small but impactful feature is the ability to change the colour of app icons, adding a touch of monochrome to your home screen.
Continuing the customisation trend, the Control Centre has received its first major redesign since iOS 11. The big change? Multiple pages. No longer limited to a single screen, you’ll now be able to organise your controls into logical groups. Plus, there are loads of new controls you can add and resize, making it quicker and easier to perform actions without opening specific apps.
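For developers, the interesting part is that third-party apps can now supply their own controls through WidgetKit. Here's a minimal sketch of what one might look like; the intent, kind string, and timer behaviour are all hypothetical illustrations of ours, not Apple's sample code.

```swift
import WidgetKit
import SwiftUI
import AppIntents

// Hypothetical intent the control triggers; the real work would
// happen in your app's shared state.
struct StartTimerIntent: AppIntent {
    static let title: LocalizedStringResource = "Start Timer"

    func perform() async throws -> some IntentResult {
        // Start the timer here.
        return .result()
    }
}

// A minimal Control Centre control built with WidgetKit's new ControlWidget API.
struct StartTimerControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.startTimer") {
            ControlWidgetButton(action: StartTimerIntent()) {
                Label("Start Timer", systemImage: "timer")
            }
        }
        .displayName("Start Timer")
    }
}
```

Because the button is backed by an App Intent, the same action can also be wired to the Action button or Siri without extra work.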
There have also been exciting announcements about how we interact with our iPhones. Starting with AirPods, Apple is introducing the ability to interact with Siri by nodding or shaking your head while wearing them. This concept isn’t entirely new (Bose tried it with their short-lived Bose Frames in 2019), but with Apple’s backing, we’re keen to see how this develops.
Additionally, for iPhone 12 and later models and the iPhone SE (3rd generation), Apple is introducing an eye-tracking accessibility feature. Previously, this required a separate eye-tracking device connected to a computer, but recent advancements have made it viable on a phone. Similar to Android's 'Camera Switch' (which requires your device to be securely mounted), we're intrigued to see how it shapes human-computer interaction on mobile devices.
Development tools
Apple’s development tools also saw some exciting updates, particularly with Core ML, their system for leveraging Apple Silicon to run ML and AI models on-device (you can read more about the Neural Engine inside your iPhone here). The latest updates make these models run faster and more efficiently, and there are significant enhancements to the Vision framework, including text extraction, face detection, and body pose recognition.
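To give a flavour of the Vision framework, here's a minimal sketch of text extraction using the long-standing VNRecognizeTextRequest API; the function name and completion-based shape are our own illustration, not Apple's sample code.

```swift
import UIKit
import Vision

// A minimal sketch: pull the recognised lines of text out of a UIImage.
func extractText(from image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate string for each detected text region.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```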
Xcode 16 was announced, featuring an improved code completion tool that uses machine learning to make context-based suggestions as you type. This promises to enhance productivity while keeping code completely private by running locally on the developer’s device.
Swift 6, the first major update in five years, introduces the new Swift Testing framework, making it easier for developers to write tests and verify their code behaves correctly. Combined with the new code completion feature, we’re hopeful this will significantly boost efficiency in app development. Swift 6 also tightens up concurrency: its new language mode checks for data races at compile time, making multithreaded apps safer and easier to get right.
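To show the flavour of Swift Testing: the @Test macro marks a test (no "test" prefix or XCTestCase subclass needed) and #expect replaces the old XCTAssert family. The basketTotal function below is a hypothetical piece of code under test, purely for illustration.

```swift
import Testing

// Hypothetical code under test, for illustration only.
func basketTotal(net: Double, vatRate: Double) -> Double {
    net * (1 + vatRate)
}

struct BasketTests {
    @Test func totalIncludesVAT() {
        // #expect records a failure, with the values involved, if the condition is false.
        #expect(basketTotal(net: 100, vatRate: 0.2) == 120)
    }
}
```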
Vision Pro
At Atomic, we were thrilled to get an early demo of the Vision Pro—one of the first in the UK—and even more excited to get our own device. We paid close attention to updates in this area.
There were several updates to the visionOS user interface, especially around photos and videos, along with new interaction methods using a range of hand gestures.
From a development perspective, the key updates include Volumetric APIs, which allow two apps to run side by side, enhancing the Vision Pro’s usability. TabletopKit lets developers create multiplayer tabletop games with each player wearing their own Vision Pro, all connected via SharePlay. Additionally, new enterprise APIs open up access to additional sensors. We’re eager to explore this further to see how it could benefit our clients.
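As a taste of the volumetric side, here's a minimal sketch of a visionOS app that presents its content in a bounded 3D volume rather than a flat window; the app name, window id, asset name, and dimensions are all placeholders of our own.

```swift
import SwiftUI
import RealityKit

// A minimal sketch of a volumetric scene on visionOS. Volumes are bounded
// 3D windows that can sit alongside other apps in the Shared Space.
@main
struct BoardGameApp: App {
    var body: some Scene {
        WindowGroup(id: "gameBoard") {
            // "GameBoard" is a hypothetical 3D asset bundled with the app.
            Model3D(named: "GameBoard")
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.4, depth: 0.6, in: .meters)
    }
}
```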
Stay tuned for more in-depth reviews and insights as we get our hands on these new tools and features!