Blending UI and AI as a Game Artist
It’s Time for Growth – with Vera Kozlenko
At Tactile, we encourage our team members to take ownership of their own development and drive their upskilling to match the needs of the company, as well as their own aspirations.
The desire to upskill is crucial to the success of our business, and we believe that team members with this mindset are a big part of what pushes us forward.
We chatted with our UI Game Artist, Vera Kozlenko, about how she has grown into the most passionate AI Artist in the company 👏
Read more about how she upskills by blending UI and AI below 🚀

Tactile: Hi Vera, can you tell us a little bit about what your role was when you started at Tactile?
Vera: I initially applied for a 2D Artist position but was instead offered the role of a UI Artist.
In this capacity, I joined Lily’s Garden Product team, where I worked on creating new in-game features and other UI-related elements, contributing directly to the game’s user experience and visual polish.
Tactile: How have you grown and developed in your role since then?
Vera: Although my background was originally focused on illustration and 3D art, working on Lily’s Garden sparked my interest in expanding my technical expertise.
I closed those knowledge gaps on my own and significantly improved my Unity and animation skills. Over time, I moved to creating animations directly in Unity, and in recent months I’ve grown passionate about experimenting with shaders to create visual effects.
Tactile: So how did you end up diving deeper and deeper into the world of AI?
Vera: I have a Master’s in Mathematics and Applied Computer Science, so I’ve always been drawn to both the visual and technical sides of art. Working as a UI Artist gave me the ideal opportunity to combine my strong technical foundation with a creative role, especially now with the rise of AI tools. I explored how other artists use AI in their workflows and was inspired by the quality and originality of their results.
The ComfyUI interface didn’t intimidate me, and I felt curious, motivated, and driven to innovate. So I began setting aside 1–2 days each sprint to learn and experiment with new AI models. This self-driven exploration helped me bring technical innovation into our production pipeline and evolve my creative process, and the approach works especially well for concept art.
Tactile: How has your day-to-day changed since you incorporated more AI into your workflows?
Vera: I continue to work as a UI Artist, but AI has become a core part of my daily workflow. I use our in-house plugin for prototyping, sketching, and polishing assets. I can now create any asset using AI alone, or combine it with 3D or 2D, depending on the needs of the project. I really enjoy working with image-to-image workflows – taking a sketch or a 3D model and turning it into a polished, visually appealing result.
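For readers curious to try a similar image-to-image pass themselves, here is a minimal sketch using the open-source diffusers library rather than Tactile’s in-house plugin. The model id, file names, prompt, and strength value are illustrative placeholders, not the exact production setup.

```python
# Minimal image-to-image sketch with diffusers (illustrative, not Tactile's plugin).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Placeholder checkpoint; swap in whichever Stable Diffusion model you use.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from a rough sketch or a render of a 3D blockout (hypothetical file name).
init_image = Image.open("prop_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="ornate treasure chest prop, casual mobile game style, clean shading",
    image=init_image,
    strength=0.55,        # lower = stay closer to the sketch, higher = reinterpret more
    guidance_scale=7.5,
).images[0]

result.save("prop_polished.png")
```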
For one task, I generated a depth map from my 3D model and used a ControlNet-based workflow to let the AI handle the color pass, which helped me quickly explore and find color combinations I really liked. I tested the approach for fun, though in the end I just finished the chest assets by hand.
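A depth-to-color pass like the one Vera describes can be approximated with a depth ControlNet. The sketch below again uses diffusers; the model ids, file names, and prompt are assumptions for illustration, and the actual in-house workflow may look different.

```python
# Depth-conditioned color exploration (illustrative sketch, not the in-house setup).
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

# A depth ControlNet keeps the geometry fixed while the prompt drives the colors.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Depth map rendered straight from the 3D model (hypothetical file name).
depth_map = Image.open("chest_depth.png").convert("RGB")

# Generate a few color variations over the same geometry by varying the seed.
for seed in range(4):
    image = pipe(
        prompt="treasure chest, emerald and gold color scheme, stylized game art",
        image=depth_map,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"chest_colors_{seed}.png")
```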
My daily routine has also evolved: I help to maintain our internal AI tools, monitor the status of our AI machines and our custom Tactile Photoshop plugin, and make sure everything runs smoothly. I assist artists when something breaks or gets stuck, run short training sessions, and support anyone interested in using AI in their creative workflow.
Tactile: Do you have any tips for artists learning about how they can use AI in their work? Where is a good place to start?
Vera: If you’re just starting to explore AI in art, my advice is: watch how other artists work. See what models they use, how they build their pipelines, and what kind of results they get. Take a moment to reflect on what exactly you admire in their results, and try to understand what personally resonates with you in their work. For example, I never used to enjoy creating characters, but now the idea of designing one doesn’t feel intimidating anymore.
When it comes to more technical topics, my advice is this: try building your own workflows with Stable Diffusion or Flux using ComfyUI. Start with something simple like text-to-image, then move on to image-to-image, learn masks, and so on. That’s the foundation. After that, you can explore custom styles and more advanced setups.
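As a concrete starting point for that first text-to-image step, here is a minimal scripted equivalent of a basic graph, using diffusers as a stand-in for a ComfyUI setup. The model id, prompt, and settings are placeholders to adapt to your own tools.

```python
# Basic text-to-image: the "foundation" step before img2img, masks, and custom styles.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="glowing potion bottle icon, mobile game UI style, centered, plain background",
    negative_prompt="blurry, text, watermark",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]

image.save("potion_icon.png")
```

Once this works, swapping the pipeline for an image-to-image or inpainting variant is the natural next step toward the masks and custom styles mentioned above.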
Don’t expect perfect results right away – that’s totally normal. It takes time to understand how to get the look you want. And most importantly, remember: AI is just another tool. It won’t replace your creativity, but it can help you work faster, try new things, and bring your ideas to life in new ways.
Tactile: What do your development plans look like for the future?
Vera: I’m excited to explore new AI models for 3D and video generation. It’s a direction I want to spend more time on in the coming months. I’m especially interested in models that can run locally in ComfyUI. For now, I’m focused on testing the latest updates to the model we use, since staying up to date is essential when things are moving this fast.
Thank you for the lovely chat, Vera 💜