Hello, and welcome to the sixth Exode tester's report. I know it has been some time since the last one; things were in a period of adjustment and there was no real testing to be done.
Since the last report, Exode has made some changes in direction. After some research, it was decided that the game must be free to play. Free-to-play has been a trend for a while, but lately, I believe, many crypto gamers have grown tired of games not designed to last. Most have turned out to be Ponzi schemes or were not built to withstand a crypto winter.
This report will give some insight into the testing of Exode's latest gameplay mode, Challenges. This mode opens the card collection to randomly generated missions that players attempt to solve with their cards. It is PvE, and rewards are earned by successfully completing a challenge. Currently, rewards can be used to build a "Project" which helps boost some of the cards in your collection. The mode is fun because dice rolls can be a deciding factor: no matter how many cards you have, you may still fail.
I am now retired from a life of high-tech development, mostly in the communications area, where I tested designs and future chips. All computer chips start as a software structure that is run on a simulator, and I was the person who tested a design for compliance against a specification. During my last five years, I worked on a simulator design and was responsible for testing the implementation of the VHDL language in that tool. By the end, I was building the tools I had used to do my previous work; in my opinion, the top of that industry.
When Elindos announced the "Free to Play" (FTP) direction, I was excited to see what would come of it. Many FTP games are successful in that they generate income for the producer even though they are free. Along with the excitement, I was a bit worried that the effort would be time consuming. I also had in mind the quality level expected in the FTP arena: having played a few of those games, I knew a great amount of detail and effort went into them. I contacted Elindos and offered to assist with the testing side of the development effort. I have attempted to assist Elindos in the past and never really got much acceptance. I made a few points about the effort required to produce the FTP version, and that testing is the only way to ensure any quality level. I had to convince him to accept my efforts for free, as he cannot pay for development work at this time. Being a player, as well as a small investor, I want to see Exode succeed.
Elindos finally accepted my offer and work began. The only problem was that there were some features and gameplay he wanted to produce before the FTP version: something to give to the current player base as a reward for dedication and investment.
Challenges!!!
A very interesting gameplay mode where card owners play against challenges presented by a random mission generation system. This mode instantly increased the utility of 90% of the card base.

Here is an example Challenge screen with no team submitted.
The main target of the mission is the Mystery, which states what attributes are needed to possibly complete it.
Some cards can give Advantage points, and they are listed. The Rewards section states the rewards given if the player is able to solve the Mystery. There is also a Bonus appreciation section, where some skills are indicated as adding an extra bonus to the mission's end score.

The Risks section details the colony or team challenges that must be overcome. Here is a Risk that the whole colony must pass.
This risk is for the Colony team only.
This risk is for the Away team only.
Based on the description of the challenge, players submit two four-card teams, tuned to meet the requirements and overcome the challenge missions. If a player succeeds during the first round of the "season", a level 2 variant of the initial challenge is generated. The player then attempts to beat that challenge, but in the second round the cards used in the first round are on a one-week cooldown. This means the player has to build new teams to meet a very similar, but more difficult, challenge. If a player passes the level 2 challenge, they are granted a level 3 mission for the final round of the season. Each successfully completed mission earns rewards, in the form of materials for bases and, in some cases, a random card. The results of all players' challenges are presented, and the top scores receive an extra reward.
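To make the one-week cooldown rule above concrete, here is a minimal sketch of the bookkeeping it implies: cards used in one round cannot be fielded again until a week has passed. The class, method names, and card ids are all illustrative assumptions, not Exode's actual implementation.

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(weeks=1)  # assumed cooldown length from the rule above

class CooldownTracker:
    """Hypothetical tracker: remembers when each card was last used."""

    def __init__(self):
        self.used_at = {}  # card_id -> datetime of last use

    def available(self, card_id, now):
        """A card is available if never used, or used a full week ago."""
        last = self.used_at.get(card_id)
        return last is None or now - last >= COOLDOWN

    def use_team(self, card_ids, now):
        """Field a team, rejecting any card still on cooldown."""
        for cid in card_ids:
            if not self.available(cid, now):
                raise ValueError("card %s is on cooldown" % cid)
            self.used_at[cid] = now

t = CooldownTracker()
round1 = datetime(2023, 1, 1)
t.use_team(["c1", "c2", "c3", "c4"], round1)
next_day = t.available("c1", round1 + timedelta(days=1))   # still on cooldown
next_week = t.available("c1", round1 + COOLDOWN)           # available again
```

So a round-2 team must be built from cards that were not in the round-1 teams, which is exactly why a deeper collection helps.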
It was requested that the test environment be written in Python, a language I have used a bit but in which I do not consider myself an expert. I was given an API and a user/auth for access to the generation and evaluation systems under test. In the past I have used a testing system built around plain-text command, or stimulus, files: the test environment reads in the command file and invokes the code connected to each command. At the start I questioned whether implementing a command-file system in Python was a good idea. Within a few months of testing challenges, I was relieved to see that the system really works well. I had only used it to test logic designs, but it seems it fits the software testing arena as well.
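The command-file idea described above can be sketched in a few lines of Python: each line of the file names a command plus arguments, and a dispatcher invokes the handler registered for that command. The command names and handlers here are illustrative stand-ins, not Exode's actual API.

```python
# Hypothetical command-file test driver. Lines look like:
#   generate 42
#   submit c1 c2 c3 c4
# Blank lines and '#' comments are skipped.

def cmd_generate(args, state):
    """Stand-in handler: pretend to request a challenge with a seed."""
    state["challenge"] = {"seed": int(args[0])}
    return "generated"

def cmd_submit(args, state):
    """Stand-in handler: pretend to submit a team of card ids."""
    state.setdefault("teams", []).append(args)
    return "submitted"

HANDLERS = {"generate": cmd_generate, "submit": cmd_submit}

def run_command_file(lines, state):
    """Dispatch each non-empty, non-comment line to its handler."""
    results = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, *args = line.split()
        results.append(HANDLERS[name](args, state))
    return results

state = {}
results = run_command_file(
    ["# request challenge, then field a team",
     "generate 42",
     "submit c1 c2 c3 c4"],
    state,
)
```

The appeal of this style is that tests are plain data: adding a new stimulus file requires no code changes, only a new text file.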
At the start I suggested that a bug tracking system be implemented so I could communicate bug details as well as enable tracking of issues. This was initially rejected as being "too much work" to implement or find a system, among a few other excuses. I let it be and began reporting issues on Discord, which we had been using to communicate, of course. I put about 5 issues into the first report on Discord, and the next morning we had a tracking system in place. I was very happy to see this, as it gives Elindos and others a place, free of other text, to report and track issues. I was also very happy to see Elindos use the tracking system for features and other items which need to be tracked and not forgotten. In my opinion, the issue tracking system was a great addition to the development process and will prove fruitful in the future.
Initially, tests were created manually to hit various basic features like generating challenges, submitting teams, and checking results. This was time consuming, as the challenge system is a random generator, and random systems are the most difficult thing to test: their output cannot be known before it is created. A seed value enables the duplication of a generation, so the ability to replicate a challenge existed from day one.
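The value of that seed is worth spelling out: seeding a pseudo-random generator with the same value replays the identical sequence of draws, so a failing generation can be re-created exactly for debugging. The `generate_challenge` function below is a stand-in with made-up fields, not the real Exode generator.

```python
import random

def generate_challenge(seed):
    """Toy challenge generator: same seed always yields the same challenge."""
    rng = random.Random(seed)  # private generator, isolated from global state
    return {
        "mystery_difficulty": rng.randint(1, 10),
        "required_skills": rng.sample(
            ["pilot", "science", "security", "medic"], 2
        ),
    }

a = generate_challenge(1234)
b = generate_challenge(1234)  # replays the exact same challenge
c = generate_challenge(5678)  # a different seed gives an independent one
```

Using a dedicated `random.Random(seed)` instance, rather than the module-level functions, keeps each replay isolated from anything else in the test environment that also draws random numbers.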
My previous work involved creating a random generation system to apply tests to a known logic design. This testing is exactly the opposite: here the design is random and the test environment is mostly fixed, but flexible. After a couple of weeks, having gained some knowledge, I decided to embark on a generation system of my own, one that would take a challenge and run every card that fit the risks and requirements against it. The system would also record the cards used and flag errors if results were not as expected. This ended up being 6+ weeks of effort and resulted in a 90% success rate for valid submissions.
One really useful property of the test environment is that the command file can be made "re-entrant", meaning one command file can be set aside while a new one is loaded and run. Why is this cool? Because the test environment can now generate command files and run them dynamically. The final generation system for full card coverage would request a challenge through the API, evaluate it, and collect all the cards eligible for the colony and away teams. The number of cards found was always far more than could fit in one resolution API call, so many submissions were needed for each challenge. With all the cards known, the system would generate command files until every eligible card had been applied to the challenge, often resulting in command files with 20+ API calls. To increase the coverage of cards and challenge risks, many challenges were generated, each run through the above process. Though the effort did not achieve 100% coverage, many issues were uncovered.
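The re-entrant behaviour described above can be modelled with a simple stack of open command files: when a command generates a new command file, the runner pushes it and executes it immediately, then resumes the original file where it left off. The handler names here are invented for illustration.

```python
# Sketch of a "re-entrant" command-file runner. The stack holds iterators
# over command files; the top of the stack is the file currently running.

def run(initial_lines, handlers):
    stack = [iter(initial_lines)]
    log = []
    while stack:
        try:
            line = next(stack[-1]).strip()
        except StopIteration:
            stack.pop()          # finished this file: resume the one below
            continue
        if not line:
            continue
        name, *args = line.split()
        generated = handlers[name](args, log)
        if generated is not None:
            # The command produced a new command file; run it immediately.
            stack.append(iter(generated))
    return log

def h_echo(args, log):
    log.append(" ".join(args))

def h_expand(args, log):
    # Dynamically generate a sub-file that echoes each argument in turn,
    # standing in for a generator that emits one API call per eligible card.
    return ["echo %s" % a for a in args]

log = run(["echo start", "expand a b", "echo end"],
          {"echo": h_echo, "expand": h_expand})
```

Here `expand a b` generates and runs a two-line sub-file before `echo end` executes, so the log comes out as `start`, `a`, `b`, `end`: the outer file is suspended, not abandoned.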
The initial effort of testing the challenge system amounted to about 10 weeks of nearly full-time activity. Testing is ongoing, as enhancements and additions are made to the gameplay. I feel that my contributions have led to a better-quality initial release of Challenges and have enabled development to proceed at a higher pace. It was also a lot of fun, and I learned a lot of new things.
With Challenges considered stable and playable, the development effort switched to the much-anticipated gameplay mode called RAIDS!! Another tester's report will cover Raid testing and gameplay.
Best
Excellent report! Thank you for the insightful perspective! It helps me understand all the stuff going on amidst all the changes. It's nice to know Elindos has some help because he's so busy, and I know he appreciates it. 👍
Thanks for this update and for your work! As a player, I'm truly ignorant to the amount of hard work that happens in the coding caves.
I've been wanting to make a post about RAIDS, myself, but any screenshots I'd share could give away my team's strategies. Our teams are evenly matched, I think, and we need every edge we can get.
Great post. I didn't understand most of it, but I'm enjoying Challenges and Raids a lot. Can't wait to see how colonization is going...
Excellent report Sckoarn!
Challenges were a huge endeavour indeed. This, and the addition of testing, upgraded the quality of the game, but the new gameplay increased the development payload "threefold" by creating a lot of maintenance and a sense of emergency. After a while, though, we can feel we have established something durable. Collection traits were also "warming up", so to speak; now they have gained utility and even usage.
For others, please note that Sckoarn does not use testing in any way or capacity for his own advantage; in fact, he often involves himself less in playing his own challenges and much more in testing!
He also does not test any "real situation" (any specific raid or condition that actually exists for a player) and only tests collections virtually. So again, there is no advantage here.
Python is the language Sckoarn decided to use for his own tools, but we don't use it on our servers. On the technical side, we share JavaScript asset files, screen logs, and JSON results. For raids, we evolved even further, with separate and very detailed JSON log sections. So we are mostly language agnostic, and we provided a secured API URL.
We only work with people we can trust and Sckoarn has been dedicated, generous, available and trustworthy. Thank you Sckoarn for your support! :)