
Wednesday 14 December 2011

Testing: Very Useful Questions for job seekers


Black box testing
not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing
based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing
the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
Incremental integration testing
continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing
testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing
black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing.)
System testing
black box type testing that is based on overall requirement specifications; covers all combined parts of a system.
End-to-end testing
similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing
typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ’sane’ enough condition to warrant further testing in its current state.
Regression testing
re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Acceptance testing
final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing
testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing
term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance testing
term often used interchangeably with ’stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing
testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing
testing of full, partial, or upgrade install/uninstall processes.
Recovery testing
testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing
testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing
testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing
often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing
similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
User acceptance testing
determining if software is satisfactory to an end-user or customer.
Comparison testing
comparing software weaknesses and strengths to competing products.
Alpha testing
testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta testing
testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing
a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (’bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.
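As a rough illustration of the unit testing and mutation testing entries above, here is a minimal C sketch (the function and test names are invented for illustration): a small function, an assert-based unit test for it, and a deliberately "mutated" copy containing an introduced bug. Running the same test against the mutant aborts on the first failed assertion, which is exactly how mutation testing judges whether a set of test data is useful.

#include <assert.h>
#include <stdio.h>

/* Unit under test: returns the larger of two integers. */
static int max_of_two(int a, int b) {
    return (a > b) ? a : b;
}

/* Mutant: the same function with a deliberately introduced bug
   ('>' flipped to '<'), as a mutation testing tool would generate. */
static int mutant_max_of_two(int a, int b) {
    return (a < b) ? a : b;
}

/* A tiny unit test of the kind a programmer writes for a single function. */
static void test_max(int (*fn)(int, int), const char *label) {
    assert(fn(2, 3) == 3);   /* larger value second */
    assert(fn(7, -1) == 7);  /* larger value first  */
    assert(fn(5, 5) == 5);   /* equal inputs        */
    printf("%s: all assertions passed\n", label);
}

int main(void) {
    test_max(max_of_two, "original");        /* passes                              */
    test_max(mutant_max_of_two, "mutant");   /* aborts: the introduced bug is caught */
    return 0;
}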

Widely Asked Testing Interview Questions

Test case vs Use case

A use case is a high-level scenario that specifies the functionality of the application from a business perspective.
A test case is the implementation of that high-level scenario (use case): a detailed, step-by-step account of the procedure used to test a particular functionality of the application. Things get a lot more technical here.

A use case describes an entire flow of interaction that the user has with the system/application. For example, a user logging into the system, searching for a flight, booking it and then logging out is a use case. There are multiple ways a user can interact with a system, and they all map to positive/negative use cases.

Test cases are written on the basis of use cases. The test cases check whether the various functionalities that the user uses to interact with the system are working correctly or not.

Test matrix vs Test metrics

Test matrix: the tester prepares a test matrix in the test specification document; it keeps track of the testing flow, testing types, test case activities, etc.
Test metrics: measurements that indicate what level of testing has been achieved by performing a particular kind of testing on the application, expressed against a 100% "testing meter" scale.

Web testing Vs GUI testing

Web testing is server-side testing and GUI testing is client-side testing.

GUI testing is a part of web testing as well as desktop testing.

In GUI testing we check the graphical user interface: font size, font color, links, labels, etc.

Web testing deals with a 3-tier architecture; here we check the performance of the application (volume, load, stress), and we also do compatibility testing, user interface testing, etc.

Explain V model in detail?

"V" Model is one of the SDLC Methodologies.
In this methodology, development and testing take place in parallel, with both teams working from the same information. The typical "V" shows the development phases on the left-hand side and the testing phases on the right-hand side.
The development team follows a "do" procedure to achieve the goals of the company, and the testing team follows a "check" procedure to verify them.
The left arm of the V (SDLC) runs SRS → Design → HLD → LLD down to Coding at the tip; the right arm (STLC) verifies each corresponding phase on the way back up: LLD ↔ Unit Testing, HLD ↔ Integration Testing, Design ↔ System Testing, SRS ↔ User Acceptance Testing.

Web application vs Client/server application testing


1. Client/server testing involves testing of the client installation, connectivity between the client and the server, and the configuration required to connect to the server, apart from the regular GUI and functional testing; you will have a specific client version for a specific OS.
   Web-based testing: here the client is your browser, so you can skip installation testing and checking the connectivity between the browser and your server, but you additionally have to test different types of browsers on different platforms.

2. Client/server applications are those that run on a LAN or WAN but are not connected to the www or internet.
   Web-based applications are those that are connected to the www/internet, e.g. Yahoo, Google, etc.

3. Client/server uses a connection-oriented (TCP/IP) protocol.
   Web uses the HTTP protocol.

4. Example: you go to your bank and ask the customer service executive for your account balance; he or she uses a banking application to check your account balance - that is a client/server application. You can also go to a net cafe and check your account balance yourself - that is a web application. Compare what the customer service executive can do from his or her system with what you can do from the net cafe.

5. Client/server testing needs a network.
   A web test can be performed on a standalone machine.

6. In client/server testing, test phases can include build acceptance testing, prototype testing, system reliability testing, multiple phases of regression testing, and beta testing.
   For web-based testing: validating the HTML, checking for broken links, speed of access to the site, browser independence (trying different browsers), checking printed pages, performance, and sessions.

Client/Server Application:-
             It is a 2-tier architecture.
             It serves a restricted number of users.
             FTP will process the request given by the client.
   Advantage:- High performance.
   Disadvantage:- Hard to maintain.

Web Application:-
           It is a minimum 3-tier architecture.
           It serves n number of users.
           The browser will process the request.
    Disadvantage:- Slow performance.

System Vs End-to-End Testing

System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

System testing can be limited to functional as well as non-functional testing of the system, but when we talk about end-to-end testing, all the interfaces and sub-systems must come into the picture. All the end-to-end scenarios should be considered and executed before deployment of the system into the actual environment.


Smoke Vs Sanity testing

Sanity Testing:

This testing will be done by the tester to check whether the new application is ready for the major testing effort.

Smoke testing :

This is just like sanity testing, but it is done by the developer before the build is released to the tester for further testing.

LoadRunner interview questions


  1. What is load testing? - Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.
  2. What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.
  3. Did you use LoadRunner? What version? - Yes. Version 7.2.
  4. Explain the Load testing process? -
    Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.
    Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
    Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve; LoadRunner automatically builds a scenario for us.
    Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
    Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
    Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
  5. When do you do load and performance Testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
  6. What are the components of LoadRunner? - The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.
  7. What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
  8. What Component of LoadRunner would you use to play Back the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.
  9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
  10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.
  12. Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. Better simulate the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.
  13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific, and values are replaced by data created by these rules. In manual correlation, we scan for the value we want to correlate and use create correlation to correlate it.
  14. How do you find out where correlation is required? Give few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to see the values which need to be correlated. In my project, there was a unique id generated for each customer - the Insurance Number - which was generated automatically, was sequential, and was unique. I had to correlate this value in order to avoid errors while running my script. I did it using scan for correlation.
  15. Where do you set automatic correlation options? - Automatic correlation from web point of view can be set in recording options and correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for database can be done using show output window and scan for correlation and picking the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value to be created.
  16. What is a function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter (see the Vuser script sketch after this list).
  17. When do you disable log in Virtual User Generator, When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log Option: when you select Standard log, it creates a standard log of functions and messages sent during script execution to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. Extended Log Option: select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended log options.
  18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
  19. How do you write user defined functions in LR? Give me few functions you wrote in your previous project? - Before we create the user-defined functions we need to create the external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). Examples of user-defined functions used in my earlier project are GetVersion, GetCurrentTime and GetPlatform.
  20. What are the changes you can make in run-time settings? - The Run Time Settings that we make are: a) Pacing - it has the iteration count. b) Log - under this we have Disable Logging, Standard Log and Extended Log. c) Think Time - here we have two options, Ignore think time and Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether each step is marked as a transaction.
  21. Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
  22. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.
  23. What is Ramp up? How do you set this? - This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set and a value to wait between intervals can be specified. To set Ramp Up, go to ‘Scenario Scheduling Options’.
  24. What is the advantage of running the Vuser as thread? - VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
  25. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings.  
  26. What is the relation between Response Time and Throughput? - The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
  27. Explain the Configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.
  28. How do you identify the performance bottlenecks? - Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
  29. If web server, database and Network are all fine where could be the problem? - The problem could be in the system itself or in the application server or in the code written for the application.
  30. How did you find web server related issues? - Using Web resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
  31. How did you find database related issues? - By running the “Database” monitor with the help of the “Data Resource Graph” we can find database-related issues. For example, you can specify the resource you want to measure before running the Controller, and then you can see database-related issues.
  32. Explain all the web recording options?
  33. What is the difference between Overlay graph and Correlate graph? - Overlay Graph: it overlays the content of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph’s values and the right Y-axis shows the values of the graph that was merged. Correlate Graph: it plots the Y-axes of two graphs against each other. The active graph’s Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph’s Y-axis.
  34. How did you plan the Load? What are the Criteria? - Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.
  35. What does vuser_init action contain? - Vuser_init action contains procedures to login to a server.
  36. What does vuser_end action contain? - Vuser_end section contains log off procedures.  
  37. What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. Example: when a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the threshold: the threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of VuGen.
  38. What is the difference between standard log and extended log? - The standard log sends a subset of functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging when we want information about parameter substitution, data returned by the server, and advanced trace.
  39. Explain the following functions: - lr_debug_message - The lr_debug_message function sends a debug message to the output log when the specified message class is set. lr_output_message - The lr_output_message function sends notifications to the Controller Output window and the Vuser log file. lr_error_message - The lr_error_message function sends an error message to the LoadRunner Output window. lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The lrd_fetch function fetches the next row from the result set.
  40. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
  41. Types of Goals in Goal-Oriented Scenario - LoadRunner provides you with five different types of goals in a goal-oriented scenario:
    • The number of concurrent Vusers
    • The number of hits per second
    • The number of transactions per second
    • The number of pages per minute
    • The transaction response time that you want your scenario to reach
  42. Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction gradually increases; in other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
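Several of the answers above (rendezvous points, transactions, think time, correlation with web_reg_save_param, and aborting on error with lr_abort) come together in the Action section of a web Vuser script. The following is only a rough sketch in the style of a VuGen-generated C script, not a recorded one: the URL, the left/right boundaries, the {UserName} data table parameter and the rendezvous name are all made up for illustration.

Action()
{
    /* Correlation: register a search before the request so that the (hypothetical)
       session id returned by the server is saved into the SessionId parameter.
       "NotFound=warning" keeps the script running if the boundaries are not found. */
    web_reg_save_param("SessionId", "LB=session_id=", "RB=&", "Ord=1",
                       "NotFound=warning", LAST);

    lr_start_transaction("login");
    web_url("login",
            "URL=http://example.com/login?user={UserName}",   /* {UserName}: data table parameter */
            "Resource=0",
            "Mode=HTML",
            LAST);
    lr_end_transaction("login", LR_AUTO);

    /* lr_abort in action: stop this Vuser if the session id was not captured.
       lr_eval_string returns the input unchanged when the parameter is undefined. */
    if (strcmp(lr_eval_string("{SessionId}"), "{SessionId}") == 0) {
        lr_error_message("Session id not found - aborting");
        lr_abort();
    }

    lr_think_time(5);               /* emulate a real user pausing for 5 seconds  */

    lr_rendezvous("deposit_cash");  /* wait until the required Vusers arrive here */

    lr_start_transaction("deposit");
    web_url("deposit",
            "URL=http://example.com/deposit?session_id={SessionId}",
            "Resource=0",
            "Mode=HTML",
            LAST);
    lr_end_transaction("deposit", LR_AUTO);

    return 0;
}

In the Controller, the "deposit_cash" rendezvous would be what instructs, say, 100 Vusers to deposit cash at the same time, as described in the rendezvous point answer above.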

QTP interview questions



  1. What are the Features & Benefits of Quick Test Pro (QTP 8.0)? - Operates stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center. Introduces next-generation zero-configuration Keyword Driven testing technology in Quick Test Professional 8.0 allowing for fast test creation, easier maintenance, and more powerful data-driving capability. Identifies objects with Unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution. Collapses test documentation and test creation to a single step with Auto-documentation technology. Enables thorough validation of applications through a full complement of checkpoints.
  2. How to handle the exceptions using recovery scenario manager in QTP? - There are 4 trigger events during which a recovery scenario should be activated: a pop-up window appears in an opened application during the test run; a property of an object changes its state or value; a step in the test does not run successfully; an open application fails during the test run. These triggers are considered exceptions. You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during the test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps: 1. Triggered Events 2. Recovery Steps 3. Post-Recovery Test Run.
  3. What is the use of Text output value in QTP? - Output values enable us to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus by creating output values, we can capture the values that the application takes for each run and output them to the data table.
  4. How to use the Object spy in QTP 8.0 version? - There are two ways to spy on objects in QTP: 1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat). 2) Through the Object Repository dialog: click the Object Spy button. In the Object Spy dialog, click the button showing a hand symbol. The pointer then changes into a hand symbol and we point it at the object to spy on its state. If the object is not visible, or the window is minimized, hold the Ctrl button, activate the required window, and then release the Ctrl button.
  5. How is run-time data (parameterization) handled in QTP? - You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
  6. What is keyword view and Expert view in QTP? - With Quick Test’s Keyword Driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View. Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that Quick Test Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.
  7. Explain about the Test Fusion Report of QTP? - Once a tester has run a test, a Test Fusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining Test Fusion reports with Quick Test Professional, you can share reports across an entire QA and development team.
  8. Which environments does QTP support? - Quick Test Professional supports functional testing of all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services.
  9. What is QTP? - Quick Test is a graphical interface record-playback automation tool. It is able to work with any web, Java or Windows client application. Quick Test enables you to test standard web objects and ActiveX controls. In addition to these environments, Quick Test Professional also enables you to test Java applets and applications and multimedia objects on applications, as well as standard Windows applications, Visual Basic 6 applications and .NET framework applications.
