SoftwareTesters.net https://softwaretesters.net 120,000 testers from 180 countries
<![CDATA[Running Selenium from jMeter]]> Initial Notes
This is my first post, so please have mercy. I'm writing this hoping to save some time for whoever follows me into the madness of running Selenium tests in a headless browser from JMeter in a Docker container, thus enabling performance testing with real browsers.
I don't like writing stories, so I'm keeping this in list/point format so it's easy to understand and find the things I thought were useful. You can find the script attached, with details on how it works below.

Why

 - Easy way to deliver tests to CI/CD pipeline integrators
 - Flexible: you can configure the jmx to run locally or on the CI/CD pipeline and switch easily between the two
 - Assert vital fields inside the test using Selenium directly (no TestNG or JUnit required)
 - JMeter logging offers clues to what happens inside the Docker container
 - Any system that can run Java can run your test

How

 - Selenium commands are written in JSR223 samplers in JMeter
 - The browser is passed as an object between samplers
 - JMeter logging is used to log progress, errors, and more

Setup

 - Install JMeter and its Plugins Manager
 - Install the Selenium/WebDriver support plugin

Implement script

 - Start the browser depending on variables/parameters defined in JMeter:
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxBinary;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Pick the geckodriver binary depending on where the test runs
if ("true".equals(vars.get("runLocal"))) {
    System.setProperty("webdriver.gecko.driver", vars.get("setLocal"));
} else {
    System.setProperty("webdriver.gecko.driver", vars.get("setMsmt"));
}
FirefoxBinary firefoxBinary = new FirefoxBinary();
// Compare strings with equals(), not ==
if ("true".equals(vars.get("headless"))) firefoxBinary.addCommandLineOptions("--headless");
FirefoxOptions firefoxOptions = new FirefoxOptions();
firefoxOptions.setBinary(firefoxBinary);
WebDriver driver = new FirefoxDriver(firefoxOptions);
WebDriverWait wait = new WebDriverWait(driver, 50); // 50-second timeout
JavascriptExecutor js = (JavascriptExecutor) driver;
driver.get(vars.get("msm_url")); // navigate to the site; prefer vars.get() over ${} inside JSR223 scripts
driver.manage().window().maximize();

 - Pass the browser as an object between samplers:
// At the end of the first sampler, store the driver
vars.putObject("driver", driver);
// At the start of the next sampler, retrieve it
WebDriver driver = (WebDriver) vars.getObject("driver");

 - Assertions and JMeter logging functionality:
boolean assertMailbox = driver.findElement(By.xpath("//a[contains(text(),'Mailbox')]")).isDisplayed();
if (assertMailbox) {
    log.info("Mailbox tab is displayed: " + assertMailbox);
} else {
    log.error("SeleniumTestInvalidated: Mailbox tab is not displayed");
}

Selenium headless quirks and other fun stuff

OK, so this section is what I believe is most helpful and what I want to share with you. I encountered a lot of problems in my app (checking the jmx will prove that), but these are the most helpful tips I can offer anyone.
 - Scrolling options:
These require a JavascriptExecutor to be defined. The objects can be passed between samplers.
import org.openqa.selenium.JavascriptExecutor;
WebDriver driver = vars.getObject("driver");
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].scrollIntoView({block: 'center'});", driver.findElement(By.xpath("(//li[@id='tui-pin']//i[@class='fas fa-trash mr-1'])[2]")));
js.executeScript("window.scrollTo(0,0)"); //Scroll to top

 - Screenshots:
import java.io.File;
import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;

public static void takeSnapShot(WebDriver webdriver, String fileWithPath) throws Exception {
    // Convert the WebDriver object to TakesScreenshot
    TakesScreenshot scrShot = ((TakesScreenshot) webdriver);
    // Call getScreenshotAs to create the image file
    File srcFile = scrShot.getScreenshotAs(OutputType.FILE);
    // Copy the file to its destination
    File destFile = new File(fileWithPath);
    FileUtils.copyFile(srcFile, destFile);
}
takeSnapShot(driver, vars.get("pic_location") + "HomePage.png");

 - Forcing clicks when everything else fails:
Sometimes a button is simply not clickable, so as a workaround implement an Actions sequence that simulates a mouse click. This requires an Actions object to be defined.
import org.openqa.selenium.interactions.Actions;
Actions builder = new Actions(driver);
builder.moveToElement(driver.findElement(By.id("saveSubscriber")), 0, 0).click().build().perform();

 - A final check in JMeter:
If a Selenium action fails, the whole sampler is marked as a failure, but the test might also fail due to your assertions. If you don't want to search the JMeter logs for multiple error strings, add a log entry for a failed sampler in a JSR223 Assertion added as a child of the JSR223 Sampler running the Selenium scripts.
String startProcessResponseCode = SampleResult.getResponseCode()
log.info('Assertion Start')
try {
    assert startProcessResponseCode.contains('200') : "SeleniumTestInvalidated: A web element was not found or could not be clicked."
}
catch (AssertionError e) {
    log.info(e.getMessage())
}
log.info('Assertion Ended for: ' + startProcessResponseCode)

MST-ALL Master BKP.jmx





]]>
Sun, 14 Feb 2021 04:24:04 +0000
<![CDATA[SoapUI Certification Course Content]]> SOAPUI COURSE CONTENT
 
1. Introduction to Web Services
2. Create the Web Services using SOA
3. Exploring SoapUI Tool Basic Features
4. SoapUI Testing Features
5. Web Services Automation Testing Using SoapUI
6. Introduction to REST API
7. REST API Automation Testing with SoapUI
8. REST API JSON Assertions and Validations
9. Load Testing on REST API
10. SoapUI Pro Tool Extra Features
11. Security Testing with SoapUI
12. Practice Tests & Interview Questions

]]>
Mon, 08 Feb 2021 00:45:27 +0000
<![CDATA[JavaScript Course Content]]>

JavaScript Course Content

1. Introduction

2. Features of JS

3. How many ways can we add JS to HTML pages?

4. How many types of declarations are there?

5. Data types

6. Operators

7. Conditions

8. Loops

9. Types of errors

10. How to debug our code

11. What is a function, and how many ways can we create one?

12. How many types of scope are there?

13. What is hoisting?

14. What are closures?

15. What is an array? Explain array methods.

16. What is an object, and how many ways can we create one?

17. Explain String methods.

18. Explain Number methods.

19. Forms with validations

20. DOM

21. Some keywords

22. Logical programs

23. Regular expressions

24. Display properties

25. Prototypes



]]>
Mon, 08 Feb 2021 00:20:25 +0000
<![CDATA[ISTQB Question and Answers (Advanced Level)]]> Please find the ISTQB interview questions and answers for freshers and experienced candidates.

Hope this will help someone.

 

]]>
Sun, 07 Feb 2021 23:51:49 +0000
<![CDATA[Why most mobile testing is not continuous?]]>
The main challenges in achieving continuous testing as part of your mobile development
The key to achieving continuous testing is a robust test automation infrastructure, which includes an easy-to-maintain framework, a highly available testing environment, and reporting. The main reasons enterprises are not connecting their mobile device testing to their CI/CD process and practicing continuous testing are the following:
  • Mobile testing is a manual process
  • Mobile test execution time is long
  • Maintenance is high
  • Results are flaky
  • Infrastructure

More to read at https://21labs.io/why-most-mobile-testing-is-not-continuous/
]]>
Thu, 04 Feb 2021 15:54:38 +0000
<![CDATA[8 Common Mistakes When Planning and Documenting Your Tests]]>

1. Lack of detailed steps
2. Initial state is not specified 
3. Desired outcome not specified
4. Testing only positive test cases
5. Testing multiple functionalities in the same test case
6. No reusable tests
7. Testing a single user profile
8. Dependent test cases are not linked

More to read at https://21labs.io/documenting-your-tests/
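The first three mistakes in the list can be avoided mechanically by giving every test case a structure that forces those fields to be filled in. A minimal sketch (the class and field names are illustrative, not from the article):

```java
import java.util.List;

public class TestCaseDoc {
    final String id;
    final String initialState;   // mistake 2: precondition must be explicit
    final List<String> steps;    // mistake 1: detailed, ordered steps
    final String desiredOutcome; // mistake 3: expected result must be explicit

    public TestCaseDoc(String id, String initialState, List<String> steps, String desiredOutcome) {
        this.id = id;
        this.initialState = initialState;
        this.steps = steps;
        this.desiredOutcome = desiredOutcome;
    }

    // A case is only considered documented when every field is non-empty.
    public static boolean isWellDocumented(TestCaseDoc tc) {
        return !tc.initialState.isEmpty()
                && !tc.steps.isEmpty()
                && !tc.desiredOutcome.isEmpty();
    }

    public static void main(String[] args) {
        TestCaseDoc login = new TestCaseDoc(
                "TC-001",
                "User registered, logged out, on the login page",
                List.of("Enter a valid username", "Enter the matching password", "Click 'Log in'"),
                "Dashboard page is displayed for that user");
        System.out.println(login.id + " well documented: " + isWellDocumented(login));
    }
}
```

A check like this can run in a test-management pipeline so incomplete cases are rejected before review.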
]]>
Thu, 04 Feb 2021 15:52:07 +0000
<![CDATA[ZeuZ: Test automation framework for Web, Mobile, Desktop, API, and Cloud apps]]>

ZeuZ, from Automation Solutionz, is a robust test automation framework for Web, Mobile, Desktop, API, and Cloud apps. It is a scriptless, easy-to-use, all-in-one tool that allows testers to create complex workflows across all platforms in a single step. It supports continuous testing and integrates with tools such as JIRA, Git, Jenkins, TeamCity, and many more.

ZeuZ's modern architecture allows teams to automate tests on-premises, on multiple VMs, and in the cloud. Manual and automation experts can easily create Functional, Regression, Smoke, Visual, and Performance tests at a fraction of the cost. It also provides manual test management, debugging, rich reporting, and visibility across all tests.

Key Features:

  • Low learning curve, 100% scriptless, AI-supported.
  • A single test can cover any combination of Web, Mobile, Desktop, API, and Cloud apps.
  • CI/CD-ready, i.e. integrates with your favorite DevOps tools.
  • Rich reports and notifications, with screenshots and logs.
  • End-to-end: manual test management, bugs, requirements, and features.
  • Fast troubleshooting and debugging tools.
  • Highly customizable with custom code and libraries.
  • Enterprise-grade, flexible deployment (on-premises, multiple VMs, in the cloud).
  • Stress/performance testing.
  • Image/video/audio comparison and testing.
  • Flexible: able to test legacy applications as well as modern AI apps and robotics.
  • Extensive support, on-demand resources, how-to videos, etc.
]]>
Thu, 04 Feb 2021 15:41:20 +0000
<![CDATA[Testsigma: Test web, mobile apps, and APIs continuously @ DevOps speed]]>

Testsigma is among the best test automation tools available today and has marked the beginning of a new era of smart automation, best suited for today's Agile and DevOps market.

Testsigma is an AI-driven test automation tool that uses simple English to automate even complex tests and meets continuous delivery needs well. Testsigma provides a test automation ecosystem with all the elements required for continuous testing: it lets you automate Web and mobile applications and API services, and supports thousands of device/OS/browser combinations on the cloud as well as on your local machines.

See how Testsigma is unique and how this AI-driven automation software meets your automation requirements in a demo.

]]>
Thu, 04 Feb 2021 15:39:45 +0000
<![CDATA[Mobile User Equipment Tester]]>
  • Scalable architecture
  • 3GPP release 8 - 15
  • Includes PHY, MAC, RLC, PDCP, RRC, and NAS layers
  • IP traffic generator (ping, UDP, HTTP)
  • Supports Valid8 VoIP Load Tester (includes VoLTE)
  • UE data pass-thru
  • Automation API
  • Voice Quality Measurement (VQM) for QoS
  • FDD and TDD
  • Carrier Aggregation
]]>
Thu, 04 Feb 2021 15:25:24 +0000
<![CDATA[Types of Performance Testing]]> Endurance Testing: This is done to check whether an app can withstand the load it is expected to endure over a long period of time.

Scalability Testing: This is done to check the performance of an app at maximum and minimum load at the software, hardware, and database level.

Load Testing: In this, the system simulates actual user load on an app to find the threshold for the maximum load the app can bear.

Stress Testing: This is done to check the reliability, stability, and error handling of an app under extreme load conditions.

Spike Testing: In this, an app is tested with sudden increases and decreases in user load. Spike testing also tells us the recovery time an app needs to stabilize.

Volume Testing: This is done to analyze an app's behavior and response time when flooded with a large amount of data.
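Whichever of the load shapes above you run (steady load, spike, or long soak), the results are usually compared through the same summary statistic: a high response-time percentile. Here is a minimal, hedged sketch (not from this post; the sample numbers are invented) of computing such a percentile from collected latencies:

```java
import java.util.Arrays;

public class LatencyPercentile {
    // Return the p-th percentile (0..100) of the sampled latencies, in ms,
    // using the nearest-rank method on a sorted copy of the samples.
    public static long percentile(long[] latenciesMs, double p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil((p / 100.0) * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // e.g. latencies collected during a spike: mostly fast, a few slow outliers
        long[] spike = {120, 110, 130, 115, 900, 125, 118, 122, 117, 850};
        System.out.println("p95 during spike: " + percentile(spike, 95) + " ms");
    }
}
```

Comparing a p95 or p99 like this across a load test and a spike test makes the difference between "average is fine" and "some users suffer" visible.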

Compatibility Testing

Compatibility testing is performed to make sure that the app works as expected on various hardware, operating systems, network environments, and screen sizes.

Security Testing

Security testing is the most important part of the mobile app testing process and it ensures that your app is secure and not vulnerable to any external threat like malware and viruses. By doing this we can figure out the loopholes in the app which might lead to loss of data, revenue, or even trust in the organization.

]]>
Sun, 17 Jan 2021 22:47:20 +0000
<![CDATA[Byte of Python]]> For Python version 3
This book will teach you to use Python version 3. There is also guidance in the book for adapting to the older and more common Python version 2.

Who reads A Byte of Python?
Here is what people are saying about the book:

This is the book that got me into programming almost a decade ago. Thank you @swaroopch. You changed my life. -- Stefan Froelich

I am writing this email to thank you for the great help your book has done for me! It was a really good book that I enjoyed thoroughly. As a 15 year old who has never done programming before, trying to learn Python online was difficult and I couldn't understand anything. But I felt like your book was much easier to understand and eased me into the whole new world of programming. Thanks to you, I can now write a high level language with ease. I thought programming would be hard and boring, but with your book's help, I realised how fun and interesting yet simple it can be! I would like to thank you again for your hard work on helping out beginners like me. -- Prottyashita Tahiyat on Sep 17, 2019

This is the best beginner's tutorial I've ever seen! Thank you for your effort. -- Walt Michalik

The best thing i found was "A Byte of Python", which is simply a brilliant book for a beginner. It's well written, the concepts are well explained with self evident examples. -- Joshua Robin

Excellent gentle introduction to programming #Python for beginners -- Shan Rajasekaran

start to love python with every single page read -- Herbert Feutl

perfect beginners guide for python, will give u key to unlock magical world of python -- Dilip

I should be doing my actual "work" but just found "A Byte of Python". A great guide with great examples. -- Biologist John

Recently started reading a Byte of python. Awesome work. And that too for free. Highly recommended for aspiring pythonistas. -- Mangesh

A Byte of Python, written by Swaroop. (this is the book I'm currently reading). Probably the best to start with, and probably the best in the world for every newbie or even a more experienced user. -- Apostolos

Enjoying Reading #ByteOfPython by @swaroopch best book ever -- Yuvraj Sharma

A Byte of Python by @swaroopch is still the "Best newbie guide to python" -- Nickson Kaigi

Thank you so much for writing A Byte Of Python. I just started learning how to code two days ago and I'm already building some simple games. Your guide has been a dream and I just wanted to let you know how valuable it has been. -- Franklin

Download here: https://drive.google.com/file/d/1ct68yJbJwvilk1_A3TjBg1E7SCRLL4dC/view?usp=sharing]]>
Thu, 28 May 2020 19:36:34 +0000
<![CDATA[How to Build E2E Test Cases]]> Developing end-to-end (E2E) test cases poses a substantial challenge compared to writing unit and API test cases. Blocks of code and APIs can be tested against a well-defined, limited, and predetermined set of business rules. Test-driven-development (TDD) techniques can empower developers to write relevant tests alongside their code.

Developing E2E tests requires a fully different approach. This level of testing is meant to replicate user behavior that’s interacting with many blocks of code and multiple APIs simultaneously. Below we recommend a process that will help you build accurate, effective test cases for your E2E testing regime. Note that we will not cover test scripting here, but only test case development.

Here are the four major considerations to explore:

How to Scope E2E Testing

The goal of E2E testing is to make sure that users can use your application without running into trouble. Usually, this is done by running automated E2E regression tests against said application. One approach to choosing your scope could be to test every possible way users could use an application. This would certainly represent true 100% coverage. Unfortunately, it would also yield a testing codebase even larger than the product codebase, and a test runtime that is likely as long as it takes to write the build being tested.

Senior QA Engineers are often used to determining scope as well. Combining experience, knowledge of the code-base, and knowledge of the web app’s business metrics, a QA engineer can propose tests that should stop your users from encountering bugs when performing high-value actions.

Unfortunately, “should” is the weakness of this approach: biased understanding of the web app, cost, and reliance on one individual leads inevitably to bugs making their way into production.

The team should therefore instead test only how users are actually using the application. Doing so yields the optimal balance of achieving thorough test coverage without expending excessive resources or runtime or relying on an expert to predict how customers use the website.

This approach relies on user data rather than an expansive exploration of the application's different feature options. To mine user data, you'll need to use some form of product analytics to understand how your users currently use your application.
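Mining that analytics data can be as simple as counting how often each distinct flow occurs across sessions and keeping the most frequent ones as E2E candidates. A hedged sketch, assuming the analytics export gives you one ordered list of pages/actions per session (data shape and names are assumptions, not from the article):

```java
import java.util.*;
import java.util.stream.Collectors;

public class FlowMiner {
    // Each session is the ordered list of pages/actions a user touched.
    // Return the n most frequent distinct flows, most common first.
    public static List<String> topFlows(List<List<String>> sessions, int n) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> s : sessions) {
            counts.merge(String.join(" > ", s), 1, Integer::sum);
        }
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> sessions = List.of(
                List.of("home", "search", "product", "checkout"),
                List.of("home", "search", "product", "checkout"),
                List.of("home", "account", "settings"),
                List.of("home", "search", "product", "checkout"));
        System.out.println(topFlows(sessions, 1)); // the dominant flow
    }
}
```

The frequency ranking is also a natural input to the core/edge split discussed below: high-count flows are core, long-tail flows are edge candidates.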

What Bugs to Target

E2E testing should not replace or substantially repeat the efforts of unit and API testing. Unit and API testing should test business logic. Generally, a unit test ensures that a block of code always results in the correct output variable(s) for given input variable(s). An API test ensures that for a given call, the correct response occurs.

E2E testing is meant to ensure that user interactions always work and that a user can complete a workflow successfully. E2E test validations should therefore make certain that an interaction point (button, form, page, etc) exists and can be used.

Then, they should verify that a user can move through all of these interactions and, at the end, the application returns what is expected in both the individual elements and also the result of user-initiated data transformations.

Well-built tests will also look for JavaScript or browser errors. If tests are written in this way, the relevant blocks of code and APIs will all be tested for functionality during the test execution.
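The validation pattern described above (every interaction point must exist before it is used, and failures stop the workflow) can be sketched without any browser tooling. This toy version stands in a real driver with an in-memory set of element ids; the structure, not the tooling, is the point:

```java
import java.util.*;

public class WorkflowCheck {
    // Walk the workflow: every step's element must exist before we "use" it.
    // Missing elements are logged and fail the workflow immediately.
    public static boolean runWorkflow(Set<String> pageElements, List<String> steps,
                                      List<String> errorLog) {
        for (String elementId : steps) {
            if (!pageElements.contains(elementId)) {
                errorLog.add("missing interaction point: " + elementId);
                return false; // fail fast, like a sampler marking failure
            }
        }
        // A real test would also assert on the transformed data and
        // check the browser console for JavaScript errors here.
        return errorLog.isEmpty();
    }

    public static void main(String[] args) {
        Set<String> page = Set.of("loginForm", "submitButton", "dashboardLink");
        List<String> errors = new ArrayList<>();
        boolean ok = runWorkflow(page, List.of("loginForm", "submitButton"), errors);
        System.out.println("workflow passed: " + ok);
    }
}
```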

Which User Flows to Follow

The risk of a bloated test suite, beyond high maintenance cost, is the runtime that grows too long for tests to be run in the deployment process for each build. If you keep runtimes to only a few minutes, you can test every build and provide developers immediate feedback about what they may have broken, so they can rapidly fix the bug.

To prevent test suite bloat, we suggest splitting your test cases into two groups: core and edge. Core test cases are meant to reflect your core features—what people are doing repeatedly. These are usually associated with revenue or bulk usability; a significant number of users are doing them, so if they fail you’re in trouble. Edge cases are the ways that people use the application that are unexpected, unintended, or rare, but might still break the application in an important way.

The testing team will need to pick and choose which of these cases to include based on business value. It’s important to be careful of writing edge case tests for every edge bug that occurs. Endlessly playing “whack a mole” can again cause the edge test suite to become bloated and excessively resource-intensive to maintain.

If runtime allows, we recommend running your core and edge tests with every build. Failing that, we recommend running core feature tests with every build, and running the longer-runtime edge case tests occasionally, in order to provide feedback on edge case bugs at a reasonable frequency.
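The core-every-build, edge-occasionally policy above reduces to a one-line scheduling rule. A minimal sketch (the "every Nth build" knob is an assumption you would tune for your own pipeline, not a value from the article):

```java
import java.util.List;

public class SuiteScheduler {
    // Core suites run on every build; the longer edge suites run only
    // on every edgeEveryN-th build.
    public static List<String> suitesForBuild(int buildNumber, int edgeEveryN) {
        if (buildNumber % edgeEveryN == 0) {
            return List.of("core", "edge"); // occasional full run
        }
        return List.of("core"); // fast feedback on every build
    }

    public static void main(String[] args) {
        for (int build = 1; build <= 5; build++) {
            System.out.println("build " + build + " -> " + suitesForBuild(build, 5));
        }
    }
}
```

In a CI configuration the same rule is usually expressed as a separate scheduled job for the edge suite rather than in code, but the trade-off is identical.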

How to Design Test Cases

Every test case, whether it is core or edge, should focus on a full and substantial user experience. At the end of a passing test, you should be certain that the user will have completed a given task to their satisfaction.

Each test will be a series of interactions with elements on the page: a link, a button, a form, a drawing element, etc. For each element, the test should validate that it exists and that it can be interacted with. Between interactions, the test writer should look for meaningful changes in the DOM that indicate whether or not the application has responded in the accepted way.

Finally, the data in the test (an address, a product selected, some other string or variable entered into the test) should be used to ensure that the test transforms or returns that data in the way that it’s expected to.

If you build E2E test cases with this process in mind, you will achieve high-fidelity continuous testing of your application without pouring unnecessary or marginally-valuable hours into maintaining the suite. You will be able to affordably ensure that users can use your application in the way they intend to do so.

Written by Dan Widing, Co-founder and CEO of ProdPerfect

]]>
Thu, 28 May 2020 19:22:45 +0000
<![CDATA[[White Paper] Delivering better software using Test Automation]]> When companies today are trying to ship as fast as possible, test automation is becoming more and more essential. According to a recent survey, 85% of organizations have some kind of test automation in place. It's being used mostly for functional or regression testing, but also for load testing, unit testing, and Continuous Integration. Like the build and deploy phases, testing is another part of the Continuous Integration and Continuous Deployment pipelines capable of being automated.
Read more >>>
https://drive.google.com/file/d/1hDqyuKLu69DQJHZ1KZbicC66guF_mizp/view?usp=sharing
]]>
Thu, 28 May 2020 19:21:34 +0000
<![CDATA[[Whitepaper] How to choose the right API Testing Solution]]> There’s no question that API testing is integral for identifying defects at multiple layers of your application and ensuring a seamless customer experience. But there are many different approaches and tools available on the market. How do you get the ROI you’re looking for to achieve the automation necessary to deliver high quality software at the speed of Agile and DevOps initiatives?
Here your answers https://drive.google.com/file/d/1n502NM4pNUgKBO1Ox4wgHC_YmF0cLbjY/view?usp=sharing
]]>
Thu, 28 May 2020 07:38:16 +0000
<![CDATA[TestOps Introduction]]> TestOps means that there are no walls, gates, or transitions between testing and operations. The two processes are integrated into a single entity aimed at producing the best software system as quickly and efficiently as possible, and the key is ownership and accountability. The client knows they have a team that is accountable for the quality assurance of the whole process end-to-end: requirements, through development, testing, and into production. The end result is much more efficient and effective.
]]>
Mon, 23 Mar 2020 06:04:37 +0000
<![CDATA[Katalon TestOps OnPremise (KTOP): TestOps Tool]]> Katalon TestOps OnPremise (KTOP) is a web-based application that provides dynamic perspectives and an insightful look at your automation testing data in a restricted network environment. You can leverage your automation testing data by transforming and visualizing your data, analyzing test results, and seamlessly integrating with such tools as Katalon Studio and Jira.

Benefits

  • Save time spent reviewing and analyzing test execution results.
  • Provide insightful visualizations on critical testing data.
  • Quickly troubleshoot your testing process and identify defects by locating exactly which test case failed.
  • Smoothly integrate with your issues and releases in Jira.
  • Maximize your testing capacity.

Key Features

  • Install and set up on your machine privately.
  • Configure a mail server to send and receive notifications about projects.
  • Import the KTOP license generated from Katalon TestOps.
  • Other Katalon TestOps features are also included in KTOP.
]]>
Mon, 23 Mar 2020 06:03:02 +0000