NCUA to unveil new RBC proposal Jan. 15

It’s official. Credit unions will get their first look at a revised regulatory risk-based capital (RBC) plan from the National Credit Union Administration next Thursday. The new plan will be unveiled at the Jan. 15 open board meeting, according to an agenda released by the agency Thursday.

The revised proposal will come out almost a year after the NCUA first proposed an RBC regime, one that met with a serious outcry from stakeholders, federal lawmakers and more.

Jim Nussle, Credit Union National Association president/CEO, thanked the NCUA Thursday for slating the RBC discussion for the January meeting.

“CUNA will review and carefully evaluate the second risk-based capital proposal from the NCUA at the agency’s Jan. 15 board meeting,” he said. “We remain deeply concerned regarding several aspects of the original risk-based capital proposal.

“We know some of the concerns we raised in our comment letter are being considered in the second proposal, including, but not limited to: the 10.5% requirement to be well capitalized, risk weights, ensuring a second comment period and the allotted time for the implementation period.”

Lessons from a life in marketing

Terry Van Ryhn: Terry has over 30 years’ international experience in marketing communications, delivering top-calibre solutions to major clients across the globe via agencies in Detroit, Cape Town, London and the Isle of …

I have survived over 36 years in the advertising and marketing industry. Even now I can still remember vividly the smell of ink and bromide as I walked to my desk on my first day as a packaging designer for an international household product company. (This was when things were still created by hand on a drawing board, long before the advent of the Apple Mac!)

This job was not my first choice, but the advertising industry was experiencing one of its downward cycles and it was difficult for a freshly graduated designer to get into the game with no experience.

After a couple of years designing floor polish, heartburn medication, deodorant and air freshener packaging, I began to lose the will to live. I wanted to create impressive ad campaigns, for heaven’s sake! Only years later did I truly appreciate the valuable skills and lessons I learned as a packaging designer. It’s all about the detail, detail, detail!

With working in a big ad agency still an elusive dream, I decided to start my own design studio with a few other out-of-work artist friends. It’s easy when you have nothing to lose! I would set off in the morning from my small apartment in the city to find work and then return in the evening to start creating, often working through the night to deliver designs the next day. Jobs included spray painting a Willys Jeep in pink camouflage, designing and airbrushing surfboards, and producing the occasional ad for a local bar or swimming pool manufacturer.

Fast forward a few years, and I had joined forces with two other ad agencies to become known as the best little “creative hot shop” in town. It was during this time that I truly started honing my skills as a creative director by collaborating with some very talented copywriters, artists, and brand strategists. After a few more years’ experience, we started landing the big international clients, which then resulted in the big-budget TV and radio commercials, the glossy double-page magazine spreads and weeks spent on glamorous location shoots. Finally, I was living in the world of my dreams from all those years ago!

We eventually sold our not-so-little-anymore “creative hot shop” to the mighty Young & Rubicam in the early 90s, and thus started another chapter. By then I had assumed a client-facing brand strategy role, or as it was fondly referred to, I became a “suit”. It’s still fairly rare to bridge the gap between the strategic marketing and creative worlds, but it has worked for me, and I am proud to have had some of the world’s leading names among my clients over the years. These include Moet et Chandon, Hennessey Cognac, Baileys Irish Cream, KPMG, Porsche, DuPont, Remington, Cuna, Chevron and New York Life.

So, what have I learned these past three-and-a-bit decades, and are there any pieces of wisdom I wish to impart? There is nothing new I can tell anyone that they may not already know, but here is my process when it comes to branding, marketing, and the creative process.

Strategy:
- Identify and have clear business objectives.
- Talk to your members, clients, and suppliers.
  Ask them how you are doing and if there are things you can do to improve the relationship.
- Compile a detailed marketing strategy plan as the foundation on which you build your brand’s positioning, proposition, and creative execution. I’m a fan of Young & Rubicam’s Brand Asset Valuator model, which has four key pillars: differentiation and relevance, which relate to brand strength, and esteem and knowledge, which relate to brand stature. I still use that formula to shape and position brands.

Identify your story:
- Only once a clear strategy is in place and the key propositions have been identified should you engage the creative process. I always start with the copy first, which is typically the most difficult code to crack! Don’t settle for the first cute headline you come up with – explore, search for ways to capture someone’s imagination.
- Tell a story. Listening to and telling stories are part of our DNA and go all the way back to cave paintings and tribal dancing. Stories make us feel something, not just hear it. The most successful brands anchor their stories to a powerful purpose, normally underpinned by finding the truth in your brand. Whatever your brand story, believe in it – tell the truth and make people care.
- Remember that a brand develops like any personal relationship. You enjoy being around someone because you share common values. Over time both parties demonstrate their loyalty and mutual trust, and a bond develops.
- The business guru Peter Drucker said: “The purpose of business is to create and maintain a customer.” This is a powerful statement if you think about it for a minute. Everything you do in business, in any sector or industry, relates to this sentence. It’s not about making a profit in business – that will naturally happen when you get the first bit right – but creating a customer!

The creative look and feel:
- Half the creative job is done when you have identified a story to tell and the copy is simple and compelling.
- Simplicity is key in the creative process. Distil the proposition down to its pure essence so your message is crystal clear.
- Find a visual theme that can support the story narrative. Do not automatically rush to find inspirational images in photo libraries. Steer clear of those happy office workers, handshakes, rowers or mountain climbers that depict teamwork. Puzzle pieces and butterflies are also among my pet hates.
- Stand out and be different. There is a great quote on my office wall by Seth Godin: “How dare you settle for less when the world has made it so easy for you to be remarkable.”
- Any communication piece, be it a social media post, a newspaper ad or a direct mail flyer, should capture the reader’s attention and imagination, compelling them to respond.
- Your quest is to find the emotional triggers in the story you want people to believe and feel about your brand.

In simple terms, good marketing is about finding the truth in your brand and delivering a compelling story. And trying to have some fun along the way!

Duke to extend suspension of disconnections past state moratorium

Statewide — Duke Energy Indiana will continue to suspend service disconnections for nonpayment for an additional month beyond the state’s current moratorium on disconnections for nonpayment. Customers who are experiencing financial hardship due to the COVID-19 pandemic now have until September 15 to settle their accounts or make payment arrangements.

Leading up to the deadline, Duke Energy is offering customers in need the opportunity to establish payment plans of up to six months in length. The company is also urging eligible customers to take advantage of additional Low-Income Home Energy Assistance Program funds available through statewide community action agencies due to the pandemic.

In response to the COVID-19 pandemic in March, the company immediately launched a sweeping series of steps to help customers, including suspending disconnections for nonpayment, as well as late-payment fees and residential fees for credit card payments and other payment types.

Serena Williams announces baby’s birth

Go ahead and add another title to Serena Williams’ collection: Mom.

The tennis star announced via social media on Wednesday that she gave birth to a girl named Alexis Olympia Ohanian Jr.

Williams posted about her baby on her Instagram and Twitter accounts and is heard saying in a video, “We had a lot of complications, but look what we’ve got.”

The 35-year-old Williams said in late December that she was engaged to Reddit co-founder Alexis Ohanian.

NCAA Season 93 Preview: Lyceum Pirates challenged by dark horse label

Head coach: Topex Robinson
Last season: 6-12 (9th)
Holdovers: MJ Ayaay, Wilson Baltazar, Mike Nzeusseu
Key losses: Shaq Alanes, Jebb Bulawan, Joseph Gabayni
Key additions: CJ Perez, Jaycee Marcelino, Jayvee Marcelino, Spencer Pretta

For the past years, Lyceum has never been considered a serious threat to the NCAA title, with traditional power San Beda almost making the championship its personal property.

The Pirates have never won the big prize since they entered the league in 2011, but head coach Topex Robinson sees bigger things happening in Season 93.

“We’re looking at the big picture, our goal is to be known as a good basketball program,” said Robinson. “We have to remind these guys to always be hungry.”

“We’re looking past the championships, ours is the culture we want to promote. That’s bigger than the championship, the championship is the result of what we’re doing right now.”

Robinson knows Lyceum will be in the crosshairs of the other NCAA teams, and he’s looking forward to the challenge of trying to break the almost decade-long championship monopoly of San Beda.

“We’ll be marked in the season but that’s part of the challenge, we can be in the bottom and be contented or we could be there with the pressure of being out of the comfort zone, this is exciting for us.”

Spearheading the Pirates’ campaign is versatile swingman CJ Perez, who once played for San Sebastian and stayed for one year at Ateneo. Perez, though, is not alone in Lyceum’s pursuit of excellence.

The Marcelino twins, Jaycee and Jayvee, are welcome additions to Robinson’s up-tempo style of play, with the Pirates utilizing a press-and-trap defense, relentless attacks and back cuts to free up space. Wilson Baltazar, a proven shooter, is one of the main guys who will space the floor for the Pirates, while big man Mike Nzeusseu provides the tenacity underneath.

For complete collegiate sports coverage, including scores, schedules and stories, visit Inquirer Varsity.

Bahamians launch independence events with Souse Out on Saturday

Providenciales, 11 Jun 2015 – The One Bahamas Association of the Turks and Caicos forges ahead with plans to commemorate Bahamian independence in the Turks and Caicos, despite recent remarks that the events are not welcome. A souse out is planned for the TCI Bank parking lot on Saturday, as the organization, which last year donated to the HOPE Foundation for autism awareness, gears up for a number of activities surrounding the Bahamas’ 42nd independence anniversary on July 10th.

The Souse Out will feature Bahamian favorites, including Sheep Tongue Souse, and starts as early as 6:30am this Saturday.

Also this weekend, Bahamian comedians David Wallace and Will Stubbs will be on island for a Hallelujah Boys show at the Church of God of Prophecy in Five Cays; that happens on Saturday night.

AI for Unity game developers: How to emulate real-world senses in your AI characters

An AI character system needs to be aware of its environment: where the obstacles are, where the enemy is, whether the enemy is visible in the player’s sight, and so on. The quality of our non-player character’s (NPC’s) AI completely depends on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly take action on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it’s worth a quick internet search to find some videos of AI glitches for a good laugh.

In this article, we’ll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist.

You are reading an extract from Unity 2017 Game AI Programming – Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Basic sensory systems

Our agent’s sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position when you turned the lights off, because your perspective shifts and you have to rely more and more on your fuzzy memory of the room’s layout. While our senses rely on and take in a constant stream of data to navigate their environment, our agent’s AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent.

The concept of a basic sensory system is that there will be two components, Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch. These senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that’s looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain.

For our demo, this is basically what we are going to implement: a base interface called Sense that will be implemented by other custom senses. In this article, we’ll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch, when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we’ll write a minimal Aspect class that our senses will be looking for.

Cone of sight

A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn’t accurately model the way vision works for most entities.
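To make the raycast idea concrete, here is a minimal hedged sketch of a line-of-sight check; the component name and fields are illustrative, not part of the book’s demo:

```csharp
using UnityEngine;

// Minimal line-of-sight test: cast a ray toward the target and see
// whether the target is the first thing the ray hits.
public class LineOfSight : MonoBehaviour
{
    public Transform target;            // who we are trying to see
    public float viewDistance = 100.0f; // how far the agent can see

    public bool CanSeeTarget()
    {
        Vector3 direction = target.position - transform.position;
        RaycastHit hit;
        if (Physics.Raycast(transform.position, direction, out hit, viewDistance))
        {
            // If a wall is hit before the target, the target is hidden.
            return hit.transform == target;
        }
        return false;
    }
}
```

This is exactly the limitation the next section addresses: the ray only tests a single line, so a target standing just beside the ray goes unnoticed.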
An alternative to using the line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game.

The preceding figure illustrates the concept of a cone of sight. In this case, beginning with the source, that is, the agent’s eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone.

The actual implementation of the cone can vary from a basic overlap test to a more complex realistic model, mimicking eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance or periphery. A complex implementation mimics eyesight more closely; as the cone widens away from the source, the field of vision grows, but the chance of getting to see things toward the edges of the cone diminishes compared to those near the center of the source.

Hearing, feeling, and smelling using spheres

One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source, with the loudness dissipating the farther the listener is from the center. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound. The listener’s hearing is represented by a sphere, and the sounds closest to the listener are more likely to be “heard.” We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. The following figure represents our sphere and how our agent fits into the setup.

As with sight, the probability of an agent registering the sensory event can be modified based on the distance from the sensor, or treated as a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere.

Expanding AI through omniscience

In a nutshell, omniscience is really just a way to make your AI cheat. While your agent doesn’t necessarily know everything, it simply means that it can know anything. In some ways, this can seem like the antithesis to realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool to provide an extra layer of complexity.

In games, we tend to model abstract concepts using concrete values. For example, we may represent a player’s health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force, or sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it’s another tool in your toolbox as a game developer to cheat a bit and make your game more interesting by, in essence, bending the rules of AI and giving your agent data it may not otherwise have had access to through physical means.

Getting creative with sensing

While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive their environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns.
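For example, the sphere-based hearing described above might be sketched like this; the class, field names, and the linear falloff are illustrative choices, not from the book:

```csharp
using UnityEngine;

// Sphere-based hearing: sounds inside hearingRadius are "heard" with
// an intensity that fades linearly toward the edge of the sphere.
public class Hearing : MonoBehaviour
{
    public float hearingRadius = 20.0f;

    // Returns a 0..1 intensity for a sound at soundPosition; 0 means
    // the sound falls outside the hearing sphere entirely.
    public float HearSound(Vector3 soundPosition, float loudness)
    {
        float distance = Vector3.Distance(transform.position, soundPosition);
        if (distance > hearingRadius)
        {
            return 0.0f;
        }
        // Full loudness at the center, silence at the edge.
        return Mathf.Clamp01(loudness) * (1.0f - distance / hearingRadius);
    }
}
```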
Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away! Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of them until they cast or receive the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked water can clear the scent from the player so that the NPC can no longer track him. As you progress through the book, you’ll be given all the tools to pull these and many other mechanics off: sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game.

Setting up the scene

In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself by following these steps:

1. Create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles.
2. Add a plane to be used as a floor.
3. Add a directional light so that we can see what is going on in our scene.

As you can see in the example, there is a target 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you’ve set everything up correctly:

Now we will position the tank, the AI character, and the walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we can have a clear view of the following scene:

With the essential setup out of the way, we can begin tackling the code for driving the various systems.

Setting up the player tank and aspect

Our Target object is a simple sphere game object with the mesh renderer removed so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file:

```csharp
using UnityEngine;

public class Target : MonoBehaviour
{
    public Transform targetMarker;

    void Start()
    {
    }

    void Update()
    {
        int button = 0;
        // Get the point of the hit position when the mouse is being clicked
        if (Input.GetMouseButtonDown(button))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;
            if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
            {
                Vector3 targetPosition = hitInfo.point;
                targetMarker.position = targetPosition;
            }
        }
    }
}
```

You’ll notice we left in an empty Start method in the code. While there is a cost in having empty Start, Update, and other MonoBehaviour events that don’t do anything, we can sometimes choose to leave the Start method in during development, so that the component shows an enable/disable toggle in the inspector.

Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable.
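As written, the click ray accepts a hit on any collider in the scene. If your version of the scene contains more than just the floor, one hedged variant restricts the ray with a layer mask; the “Ground” layer name is an assumption for illustration, not part of the book’s setup:

```csharp
using UnityEngine;

// Variant of the click handler that only reacts to hits on a "Ground"
// layer, so clicks on obstacles or the tank itself are ignored.
public class GroundClickTarget : MonoBehaviour
{
    public Transform targetMarker;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            int groundMask = LayerMask.GetMask("Ground"); // assumed layer name
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;
            if (Physics.Raycast(ray, out hitInfo, Mathf.Infinity, groundMask))
            {
                targetMarker.position = hitInfo.point;
            }
        }
    }
}
```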
The script detects the mouse click event and then, using a raycast, it detects the mouse click point on the plane in 3D space. After that, it updates the Target object to that position in the world space in the scene.

A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin towards a given direction, and returns data on any colliders hit along the way.

Implementing the player tank

Our player tank is the simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is to assign the tag Player to our tank.

The isKinematic flag in Unity’s Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API.

The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates its destination point and the direction accordingly. The code in the PlayerTank.cs file is as follows:

```csharp
using UnityEngine;

public class PlayerTank : MonoBehaviour
{
    public Transform targetTransform;
    public float targetDistanceTolerance = 3.0f;

    private float movementSpeed;
    private float rotationSpeed;

    // Use this for initialization
    void Start()
    {
        movementSpeed = 10.0f;
        rotationSpeed = 2.0f;
    }

    // Update is called once per frame
    void Update()
    {
        // Stop once we are within the tolerance distance of the target
        if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance)
        {
            return;
        }

        // Rotate toward the target and move forward
        Vector3 targetPosition = targetTransform.position;
        targetPosition.y = transform.position.y;
        Vector3 direction = targetPosition - transform.position;
        Quaternion tarRot = Quaternion.LookRotation(direction);
        transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }
}
```

The preceding screenshot shows us a snapshot of our script in the inspector once applied to our tank. This script queries the position of the Target object on the map and updates its destination point and the direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable.

Implementing the Aspect class

Next, let’s take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes called aspectType. That’s all of the variables we need in this component. Whenever our AI character senses something, we’ll check the aspectType to see whether it’s the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this:

```csharp
using UnityEngine;

public class Aspect : MonoBehaviour
{
    public enum AspectTypes
    {
        PLAYER,
        ENEMY,
    }

    public AspectTypes aspectType;
}
```

Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot:

Creating an AI character

Our NPC will be roaming around the scene in a random direction. It’ll have the following two senses:

- The perspective sense will check whether the tank aspect is within a set visible range and distance
- The touch sense will detect if the enemy aspect has collided with its box collider, which we’ll be adding to the tank in a later step

Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own.
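That check boils down to a single comparison. As a minimal sketch, the filter both senses will apply can be written as the helper below; the helper class is illustrative and not in the book, whose senses inline this logic:

```csharp
// Illustrative helper: an aspect is "interesting" to a sense only when
// it exists and its type differs from the aspect type the sense is
// configured with.
public static class AspectFilter
{
    public static bool IsDetectable(Aspect aspect, Aspect.AspectTypes senseAspectName)
    {
        return aspect != null && aspect.aspectType != senseAspectName;
    }
}
```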
The code in the Wander.cs file is as follows:

```csharp
using UnityEngine;

public class Wander : MonoBehaviour
{
    private Vector3 targetPosition;
    private float movementSpeed = 5.0f;
    private float rotationSpeed = 2.0f;
    private float targetPositionTolerance = 3.0f;
    private float minX;
    private float maxX;
    private float minZ;
    private float maxZ;

    void Start()
    {
        minX = -45.0f;
        maxX = 45.0f;
        minZ = -45.0f;
        maxZ = 45.0f;

        // Get Wander Position
        GetNextPosition();
    }

    void Update()
    {
        // Pick a new destination once the current one has been reached
        if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance)
        {
            GetNextPosition();
        }

        // Rotate toward the destination and move forward
        Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }

    void GetNextPosition()
    {
        targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ));
    }
}
```

The Wander script generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method then rotates our enemy and moves it toward this new destination. Attach this script to our AI character so that it can move around in the scene. The Wander script is rather simplistic.

Using the Sense class

The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses, and are executed from the Start and Update methods, respectively.

Virtual methods are methods that can be overridden using the override modifier in derived classes. Unlike abstract methods, virtual methods do not require that you override them.

The code in the Sense.cs file looks like this:

```csharp
using UnityEngine;

public class Sense : MonoBehaviour
{
    public bool enableDebug = true;
    public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY;
    public float detectionRate = 1.0f;

    protected float elapsedTime = 0.0f;

    protected virtual void Initialize() { }
    protected virtual void UpdateSense() { }

    // Use this for initialization
    void Start()
    {
        elapsedTime = 0.0f;
        Initialize();
    }

    // Update is called once per frame
    void Update()
    {
        UpdateSense();
    }
}
```

The basic properties include its detection rate to execute the sensing operation, as well as the name of the aspect it should look for. This script will not be attached to any of our objects since we’ll be deriving from it for our actual senses.
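Before the real senses, here is a tiny illustrative subclass, not from the book, showing how the virtual-method pattern is meant to be used; the book’s Perspective and Touch senses below follow the same shape:

```csharp
using UnityEngine;

// Illustrative sense: ticks at detectionRate using the elapsedTime and
// detectionRate members inherited from Sense.
public class HeartbeatSense : Sense
{
    protected override void Initialize()
    {
        elapsedTime = 0.0f;
    }

    protected override void UpdateSense()
    {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= detectionRate)
        {
            elapsedTime = 0.0f;
            print("Sense tick");
        }
    }
}
```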
Giving a little perspective

The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console. The code in the Perspective.cs file looks like this:

```csharp
using UnityEngine;

public class Perspective : Sense
{
    public int fieldOfView = 45;
    public int viewDistance = 100;

    private Transform playerTransform;
    private Vector3 rayDirection;

    protected override void Initialize()
    {
        playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
    }

    protected override void UpdateSense()
    {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= detectionRate)
        {
            DetectAspect();
        }
    }

    // Detect perspective field of view for the AI Character
    void DetectAspect()
    {
        RaycastHit hit;
        rayDirection = playerTransform.position - transform.position;
        // Is the player within the field of view angle?
        if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView)
        {
            // Shoot a ray toward the player, limited to the view distance
            if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance))
            {
                Aspect aspect = hit.collider.GetComponent<Aspect>();
                if (aspect != null)
                {
                    // Check the aspect
                    if (aspect.aspectType != aspectName)
                    {
                        print("Enemy Detected");
                    }
                }
            }
        }
    }
}
```

We need to implement the Initialize and UpdateSense methods that will be called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI’s current direction. If it’s in the field of view range, we shoot a ray in the direction in which the player tank is located. The ray length is the value of the visible distance property. The Raycast method will return when it first hits another object. This way, even if the player is in the visible range, the AI character will not be able to see it if it’s hidden behind a wall. We then check for an Aspect component on the object that was hit, and the enemy counts as detected only if that object has an Aspect component whose aspectType is different from the sense’s own.

The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance so that we can see the AI character’s line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. This method can be illustrated as follows:

```csharp
void OnDrawGizmos()
{
    if (playerTransform == null)
    {
        return;
    }

    Debug.DrawLine(transform.position, playerTransform.position, Color.red);

    Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance);

    // Approximate perspective visualization
    Vector3 leftRayPoint = frontRayPoint;
    leftRayPoint.x += fieldOfView * 0.5f;

    Vector3 rightRayPoint = frontRayPoint;
    rightRayPoint.x -= fieldOfView * 0.5f;

    Debug.DrawLine(transform.position, frontRayPoint, Color.green);
    Debug.DrawLine(transform.position, leftRayPoint, Color.green);
    Debug.DrawLine(transform.position, rightRayPoint, Color.green);
}
```

Touching is believing

The next sense we’ll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component and its IsTrigger flag is on.

We need to implement the OnTriggerEnter event, which will be called whenever another collider enters the collision area of this game object’s collider. Since our tank entity also has collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and player tank collide.

Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively.

The code in the Touch.cs file is as follows:

```csharp
using UnityEngine;

public class Touch : Sense
{
    void OnTriggerEnter(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null)
        {
            // Check the aspect
            if (aspect.aspectType != aspectName)
            {
                print("Enemy Touch Detected");
            }
        }
    }
}
```

Our sample NPC and tank have BoxCollider components on them already. The NPC has its sensor collider set to IsTrigger = true. If you’re setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot:

The previous screenshot shows the box collider on our enemy AI that we’ll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up:

For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want.

Testing the results

Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to move to the clicked location. You should see the Enemy touch detected message in the console log window whenever our AI character gets close to our player tank.

The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect.
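While testing, it can also help to see the touch trigger itself in the Scene view. Here is a small hedged helper, not part of the book’s demo, that outlines this object’s BoxCollider with gizmos:

```csharp
using UnityEngine;

// Draws the attached BoxCollider as a wire cube in the Scene view so
// the touch trigger area is visible during play testing.
public class TriggerVisualizer : MonoBehaviour
{
    void OnDrawGizmos()
    {
        BoxCollider box = GetComponent<BoxCollider>();
        if (box == null)
        {
            return;
        }

        Gizmos.color = Color.yellow;
        // Respect the object's position, rotation, and scale.
        Gizmos.matrix = transform.localToWorldMatrix;
        Gizmos.DrawWireCube(box.center, box.size);
    }
}
```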
Move the player tank in front of the NPC, and you’ll get the Enemy detected message. If you go to the editor view while running the game, you should see the debug lines being rendered. This is because of the OnDrawGizmos method implemented in the Perspective sense class.

To summarize, we introduced the concept of using sensors and implemented two distinct senses, perspective and touch, for our AI character.

If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming – Third Edition to explore the brand-new features in Unity 2017.